Article

When Institutions Cannot Keep up with Artificial Intelligence: Expiration Theory and the Risk of Institutional Invalidation

Management Department, SBS Swiss Business School, Flughafenstrasse 3, 8302 Kloten-Zurich, Switzerland
Adm. Sci. 2025, 15(7), 263; https://doi.org/10.3390/admsci15070263
Submission received: 15 June 2025 / Revised: 2 July 2025 / Accepted: 4 July 2025 / Published: 7 July 2025

Abstract

As Artificial Intelligence systems increasingly surpass or replace traditional human roles, institutions founded on beliefs in human cognitive superiority, moral authority, and procedural oversight encounter a more profound challenge than mere disruption: expiration. This paper posits that, instead of being outperformed, many legacy institutions are becoming epistemically misaligned with the realities of AI-driven environments. To clarify this change, the paper presents the Expiration Theory. This conceptual model interprets institutional collapse not as a market failure but as the erosion of fundamental assumptions amid technological shifts. In addition, the paper introduces the AI Pressure Clock, a diagnostic tool that categorizes institutions based on their vulnerability to AI disruption and their capacity to adapt to it. Through an analysis across various sectors, including law, healthcare, education, finance, and the creative industries, the paper illustrates how specific systems are nearing functional obsolescence while others are actively restructuring their foundational norms. As a conceptual study, the paper concludes by highlighting the theoretical, policy, and leadership ramifications, asserting that institutional survival in the age of AI relies not solely on digital capabilities but also on the capacity to redefine the core principles of legitimacy, authority, and decision-making.

1. Introduction

This conceptual paper presents the Expiration Theory, which explains that institutions fail not only because of inefficiency or poor performance, but also because of epistemic misalignment produced by algorithmic pressure. As artificial intelligence (AI) systems become increasingly advanced—capable of generating text, making legal predictions, diagnosing illnesses, and allocating credit—traditional institutions that rely on human decision-making face significant challenges. The issue extends beyond automation or efficiency; it is epistemic, concerning what counts as knowledge itself. Many institutional models rest on outdated assumptions: that human judgment is essential, that moral authority belongs solely to humans, and that procedural rules ensure legitimacy. However, AI presents a systemic challenge by questioning the very foundations of human-centered systems (Floridi & Cowls, 2019; Danaher, 2019). Predictive algorithms and automated decision-making challenge established beliefs (Eubanks, 2018; Pasquale, 2015), causing many legacy systems to fall behind the rapid advances and reasoning of AI (Brynjolfsson & McAfee, 2014; Rahwan et al., 2019).
This paper examines how AI introduces a new kind of institutional vulnerability—one not rooted in performance failure or stakeholder dissatisfaction, but in what we call assumption decay: the gradual erosion of the foundational beliefs, interpretive frameworks, and epistemic justifications that institutions rely upon to make sense of their functions. Unlike legitimacy loss, which reflects the external withdrawal of support, assumption decay denotes an internal misalignment between institutional routines and the algorithmic environments in which they now operate.
While existing theories of disruption (Christensen, 1997), institutional path dependence (Mahoney, 2000), and legitimacy theory (Suchman, 1995) help explain organizational change and survival under technological pressure, they offer limited tools to understand when institutions become conceptually obsolete—continuing to operate procedurally while losing coherence with the world they aim to govern.
To address this, we introduce Expiration Theory—a conceptual framework that explains institutional breakdown as a function of assumption decay rather than structural collapse. We argue that in AI-mediated contexts, institutions risk expiration when they can no longer interpret, justify, or oversee the systems they rely on. This presents an epistemic crisis that cannot be resolved solely through performance optimization or compliance reform. To implement this idea, the paper introduces the AI Pressure Clock, a model that assesses institutions based on their vulnerability to AI disruption and their ability to adapt epistemically.
The core research question guiding this inquiry is as follows:
What makes specific leadership models, governance systems, and societal institutions expire under the accelerating influence of AI?
The paper demonstrates how the Expiration Theory can reveal early signs of institutional obsolescence, using examples from finance, law, education, and the creative arts. It suggests ways to validate this framework, such as tracking qualitative indicators of assumption decay and conducting sector-specific expiration audits. This paper contributes to AI governance, institutional resilience, and leadership studies by framing AI as a meta-disruptor capable of invalidating the very premises on which institutions are built (Binns et al., 2018; Grint, 2008; Zuboff, 2019). As AI systems transform how we think, make decisions, and exercise authority, existing theories on institutional viability must adapt.
This paper explores Expiration Theory within the context of disruption, legitimacy, and AI governance, identifying gaps in institutional responses to epistemic pressure. It introduces key concepts, such as assumption decay and the AI Pressure Clock, for evaluating institutional risk. Real-world examples from law, finance, education, and humanitarian sectors are used, focusing on both the Global North and South. The paper discusses policy implications, including the roles of the Ultimate Responsibility Owner and institutional sandboxes. It concludes by addressing how institutions can realign epistemically and ethically in the age of machine reasoning.

2. Literature Review

This section reviews research on disruption theory, institutional change, and the impact of artificial intelligence (AI). While existing studies provide insight into technological displacement and organizational adaptation, they do not adequately explain why entire institutions—grounded in particular knowledge, authority, and moral frameworks—become obsolete in AI-driven environments.

2.1. Disruption and Institutional Inertia

Disruption theory, as formulated by Christensen (1997), describes how new entrants displace dominant firms by targeting neglected markets and incrementally improving their offerings. This model has been expanded to include digital transformation (Roblek et al., 2021), policy and ecosystem dynamics (Chemma, 2021; Sun & Zhou, 2024), and organizational capabilities (Guo et al., 2023; Feng et al., 2022). Yet these frameworks are primarily performance-based, focusing on markets and competition.
When applied to institutions—such as legal systems, educational bodies, and governance frameworks—disruption theory proves inadequate. Institutions are governed by path-dependent norms, deeply embedded logics, and slow-moving assumptions (Mahoney, 2000; Pieczewski, 2023). As AI accelerates the pace of decision-making and alters interpretive standards, institutions that rely on human-centric authority and procedural deliberation face difficulty adapting—not because they are inefficient, but because they are epistemically misaligned.

2.2. AI as a Meta-Disruptor

Unlike previous technologies that augmented human labor, AI challenges the necessity of human cognition in core functions like reasoning, classification, and judgment. Machine learning systems now outperform humans in diagnostics (Esteva et al., 2017), risk assessment (Kleinberg et al., 2017), and strategic planning (Silver et al., 2017). These advances shift authority away from human-centered logic toward algorithmic governance (Rahwan et al., 2019), undermining the institutional premises of legitimacy, deliberation, and explainability (Doshi-Velez & Kim, 2017).
This shift is not just disruptive; it is epistemic. AI redefines what counts as valid knowledge, credible judgment, and accountable decision-making. As such, it introduces a meta-disruption: invalidating institutional design at the level of assumptions rather than outputs. The challenge for institutions is not merely technological substitution but the erosion of the conceptual frameworks that justify their existence.

2.3. Gaps in Current Research

Despite the growing literature on AI ethics and algorithmic accountability (Floridi & Cowls, 2019; Mittelstadt et al., 2016), much of it focuses on model-level issues, such as bias, fairness, and transparency, without addressing how institutions can lose epistemic integrity while remaining structurally intact. Similarly, research on institutional adaptation emphasizes operational flexibility (Oliver, 1991; Scott, 2014) but often overlooks whether existing assumptions align with the realities of AI-driven systems. Recent studies on institutional incoherence and interpretive breakdowns (Stoltz et al., 2019; Krajnović, 2020) highlight this misalignment but lack a cohesive conceptual framework. Our research aims to fill this gap by developing a model that explains how institutions deteriorate not due to performance failures, but because of a loss of conceptual coherence.
A framework is necessary to identify where institutions may be failing, not just because of AI’s advantages but due to a mismatch between human-centered designs and the logic of intelligent machines.
Existing research on technological disruption typically explains institutional change through market competition, performance displacement, or efficiency gains (Christensen, 1997; Roblek et al., 2021). However, these accounts often miss the more profound epistemic impacts of artificial intelligence. Disruption theory posits that institutions can adapt by utilizing the same cognitive and normative frameworks that initially shaped them. In contrast, the Expiration Theory suggests that AI creates a paradigm shift, rendering these frameworks not only insufficient but also conceptually outdated. This is a crucial distinction: disruption implies replacement within a shared logic, whereas expiration signifies a breakdown of that logic itself (de Vaujany et al., 2020; Wen, 2024).
Recent AI governance studies stress the need for institutions to reassess both their operational abilities and their cognitive and epistemic foundations. For example, Oesterling et al. (2024) propose computational models to analyze institutional dynamics under algorithmic pressure, while Stoltz et al. (2019) point out that AI-driven decision-making can lead to a collapse of institutional coherence. Furthermore, Coşkun and Arslan (2024) argue that legitimacy in institutions during the AI era is increasingly contingent upon epistemic transparency and adaptability. These findings underscore the necessity for frameworks like Expiration Theory, which address institutional failure through the lens of assumption decay and conceptual obsolescence, rather than relying solely on performance metrics.

2.4. Limitations of Existing Institutional Frameworks

Mainstream institutional theories—such as isomorphism, legitimacy theory, and resilience theory—struggle to address the challenges AI presents to institutional logic.
Isomorphism (DiMaggio & Powell, 1983) explains how organizations converge due to external pressures. However, during AI disruption, imitating traditional structures may reinforce outdated logics instead of facilitating adaptation (Coşkun & Arslan, 2024). For instance, replicating human-centric processes in algorithmic settings can create a facade of legitimacy while hiding deeper mismatches.
Legitimacy theory (Suchman, 1995) centers on stakeholder perceptions, suggesting institutions fail without external support. However, the Expiration Theory posits that institutions can cease to be relevant even if they maintain an outward appearance of legitimacy, as their reasoning frameworks no longer align with decision-making in AI contexts (Oesterling et al., 2024; Stoltz et al., 2019).
Resilience frameworks (Oliver, 1991; Scott, 2014) focus on maintaining continuity and adapting to shocks. However, AI-driven changes may necessitate abandoning core identities instead of preserving them. In this case, resilience can be counterproductive when foundational assumptions about human authority, deliberation, and control become invalid (Gidley, 2020).
Adjacent frameworks in institutional theory—such as institutional layering (Thelen, 2004), drift (Hacker, 2004), and punctuated equilibrium (Baumgartner & Jones, 2010)—explain how institutions evolve incrementally or react to significant disruptions. Layering involves adding new practices to existing structures; drift involves subtle shifts in interpretation over time; and punctuated equilibrium points to periods of sudden change after long durations of stability.
While these models help us understand institutional transformation, they generally assume institutions maintain epistemic continuity, meaning they can still interpret their environments and uphold their authority. However, they often overlook that institutions may seem persistent yet become disconnected from the systems they govern.
Expiration Theory addresses this by introducing assumption decay, which refers to the weakening of the core beliefs and justifications that maintain institutions’ coherence. Unlike changes in structure or legitimacy, assumption decay can make institutions unable to understand the very systems they are meant to regulate. This perspective prompts exploration of what makes an institution epistemically obsolete in the age of AI, even when its operations appear unchanged.
In recent years, several AI governance models have emerged, including the OECD AI Governance Maturity Model (OECD, 2024b), ISACA’s Enterprise AI Governance Program Framework (ISACA, 2024), and the EU AI Act’s risk-based classification scheme (EU, 2024). While these frameworks help with oversight, accountability, and compliance, they largely focus on procedure—addressing stages of adoption, risk categorization, and human involvement—without tackling the underlying issues that lead to institutional weaknesses in dealing with AI. Expiration Theory adds value by examining why foundational assumptions of institutions may clash with algorithmic environments and when cognitive realignment is necessary.

3. Conceptualizing Expiration Theory

This section presents the Expiration Theory, a model that explains how institutions fail not due to poor performance, but because of decaying assumptions and misaligned knowledge in the context of artificial intelligence.

3.1. Origins and Construction of the Theory

Expiration Theory is developed as a deductive theoretical extension that synthesizes insights from disruption theory, institutional path dependence, and AI governance literature. It draws on a triangulation of three foundations:
1. Conceptual synthesis of existing literature on technological disruption (Christensen, 1997; Roblek et al., 2021), institutional rigidity (Mahoney, 2000; Oliver, 1991), and legitimacy failure (Suchman, 1995).
2. Thematic patterns observed in prior empirical case studies from sectors such as law, education, finance, and healthcare—where institutions remained structurally intact but epistemically incoherent (e.g., COMPAS in judicial systems, algorithmic hiring in HR, opaque diagnostics in healthcare).
3. Analytical extrapolation of recent findings in AI ethics, organizational incoherence, and assumption invalidation (Stoltz et al., 2019; Oesterling et al., 2024; Gidley, 2020).
The theory is based on a conceptual generalization across various disciplines and cases, rather than a single dataset or empirical study. It draws support from observed patterns of institutional mismatch in rapidly digitizing environments.
At the core of this theory lies a set of assumptions, such as
  • Cognitive supremacy of humans (e.g., leadership, law);
  • Predictability and rule-following in governance;
  • Moral centrality of human actors in decision-making;
  • Hierarchical control and slow deliberation in bureaucratic structures.
AI challenges these assumptions by introducing
  • Superior pattern recognition and autonomous decision-making;
  • Opaque reasoning mechanisms that defy explanation or justification;
  • Distributed, decentralized models of intelligence;
  • Acceleration in decision cycles that outpace human institutional rhythms.
Thus, Expiration Theory serves as a diagnostic tool that helps researchers and decision-makers understand when and why institutions start losing relevance while still maintaining their structural forms.

3.2. Expiration vs. Disruption

Disruption and expiration are different processes. Disruption occurs when one entity outperforms another within the same framework, often replacing it (Wen, 2024). Expiration, on the other hand, entails a fundamental shift in the rules of meaning and legitimacy, resulting in a breakdown of the existing paradigm (de Vaujany et al., 2020).
Disruption is a well-known concept, explained by Christensen’s (1997) theory of disruptive innovation. This theory illustrates how new competitors can displace established companies by targeting previously underutilized market segments and rapidly enhancing their offerings. Examples of this include the transition from Blockbuster to Netflix. However, its limitations become apparent with rapid technological advancements, such as AI, which can transform multiple sectors and outpace conventional innovation cycles (Wen, 2024).
Expiration is less studied but is often linked to institutional inertia and path dependence. Institutions, shaped by past choices, can become stuck in outdated patterns, making them susceptible to disruptive technologies that challenge their core assumptions (Coşkun & Arslan, 2024). This inflexibility can cause existing systems to collapse and new paradigms to emerge.
Table 1 illustrates that traditional disruption theory posits that established companies can endure by swiftly adapting to new circumstances. Conversely, the Expiration Theory contends that, at times, mere adaptation may prove insufficient due to a misalignment between the fundamental principles of the business model and the actual state of reality.

3.3. Illustrative Case: From Disruption to Expiration in Recruitment Systems

To clarify the distinction between disruption and expiration, consider the evolution of recruitment systems in large organizations.
Initially, disruption occurred with platforms like LinkedIn and AI-powered applicant tracking systems (ATSs), which surpassed traditional HR teams in sourcing and screening candidates. These tools were faster and data-driven, replacing manual processes based on intuition. During this phase, HR departments adapted by integrating these tools while retaining their authority, aligning with the concept of disruption, where new entrants outperform existing players.
However, a more significant shift—expiration—occurred when algorithmic hiring models defined candidate suitability with opaque and ethically contentious criteria. As these models grew, many HR professionals struggled to understand the reasoning behind hiring decisions, reducing their role to mere oversight of these automated processes. Traditional hiring methods, reliant on interviews and intuition, became irrelevant as HR lost its ability to interpret or challenge these decisions.
This illustrates expiration: while HR remains structurally intact, it has lost control over judgment and legitimacy. What once was a critical center of decision-making has turned into a procedural formality, signaling stability outwardly but facing internal challenges.
This example highlights the importance of the Expiration Theory, which shows that institutions can not only be outperformed but also conceptually outdated, indicating a need for a more profound rethinking of roles, assumptions, and authority beyond simple adaptation.

3.4. Expiration as Epistemic Invalidation

At its core, expiration does not equate to decline, inefficiency, or failure; rather, it indicates that an institution can no longer effectively understand its environment (Gidley, 2020; Oesterling et al., 2024). This concept differs from traditional views of institutional decline. Expiration often arises from institutional inertia and path dependence, where past decisions restrict current options and trap institutions in outdated practices (Jahan et al., 2020). This rigidity makes institutions vulnerable to significant changes that challenge their fundamental operational beliefs (Stoltz et al., 2019). Examples of vulnerabilities include the following:
  • A leadership theory based on charisma and emotional intelligence may no longer suffice in decision-making environments characterized by algorithmic input.
  • A legal system designed for analog contexts may become incoherent within a digital landscape that encompasses smart contracts, predictive enforcement, and artificial intelligence judges.
  • Democratic discourse established around print-era deliberation is insufficient to address real-time manipulation conducted through AI-generated deepfakes.

3.5. Tracking Assumption Decay in Institutions

Expiration Theory posits that institutions fail not only because of poor performance but also because their foundational assumptions become misaligned with the realities created by transformative technologies, such as artificial intelligence (Gidley, 2020; Oesterling et al., 2024). These assumptions—such as authority, valid reasoning, and decision-making processes—are often implicit and embedded in routines and cultural norms (Scott, 2014). As AI systems emerge, these assumptions can become outdated or irrational.
Researchers should investigate assumption–environment mismatches, which occur when institutional logics lead to decisions that are increasingly incoherent under AI-mediated conditions. For example, a human-centric risk committee in banking that is routinely overridden by machine-learning systems illustrates the erosion of belief in human cognitive superiority (Stoltz et al., 2019). Similarly, legal systems grappling with opaque AI tools may struggle to maintain procedural fairness and moral authority.
Assumption decay is not always evident in performance metrics. Instead, it can be tracked through qualitative methods, such as interviews, discourse analysis, and decision audits, which reveal discrepancies between established norms and actual practices (Jahan et al., 2020; Siddiki et al., 2019; Ménard, 2022). Conflicts in interpretation—where individuals within an institution disagree on the application of rules—can signal early signs of epistemic drift. These “interpretive breakdowns” (Krajnović, 2020) indicate that an institution’s cognitive framework is misaligned with its operational reality, setting it on a path toward failure.
Expiration Theory suggests creating a future Assumption Decay Index to measure epistemic misalignment within institutions over time. This index would combine insights from path dependence and institutional inertia (Mahoney, 2000; Acemoğlu et al., 2021) with modern understandings of institutional logic, legitimacy crises, and failures to adapt to technological changes (Clapp, 2020; He & Feng, 2019). Case studies, such as those involving Confucius Institutes facing legitimacy issues (Choi & Park, 2021) and changes in land tenure due to digitalization (Özer, 2022), highlight the need to view institutional failure not just as a structural breakdown but as a decline of underlying assumptions that persist even when operations continue.

3.6. Theoretical Distinctions: Expiration Theory in Institutional Collapse and Adaptation

Expiration Theory builds on institutional theory but diverges from mainstream explanations of institutional change. It specifically critiques the adequacy of isomorphism, legitimacy crisis models, and resilience frameworks in addressing AI-induced disruption.
Institutional isomorphism, as explained by DiMaggio and Powell (1983), refers to the process by which organizations become similar due to pressures for legitimacy, competition, or professional norms. While this theory highlights adaptation through imitation, Expiration Theory argues that AI disruption can invalidate the very logic of institutions, making isomorphic adaptation not only ineffective but potentially damaging. Relying on outdated institutional forms in response to AI can worsen misalignment rather than avert collapse.
Legitimacy crisis theories (Suchman, 1995) state that institutions struggle when they lose legitimacy among stakeholders. Expiration Theory expands this view, asserting that AI disrupts legitimacy on a fundamental level, not just diminishing trust but challenging the foundational knowledge and decision-making processes of institutions. For instance, a court might still enjoy public trust while its reliance on opaque AI erodes its epistemic validity. Thus, expiration can occur before any visible loss of legitimacy has occurred.
Resilience frameworks focus on an institution’s ability to adapt and maintain its core identity (Oliver, 1991; Scott, 2014). However, the Expiration Theory posits that some disruptions are unmanageable because the institution’s core assumptions have become outdated. In these cases, mere resilience is inadequate; institutions may need radical redesign or replacement. Thus, this theory introduces a new perspective on institutional decline, highlighting that foundational assumptions can expire well before the structures themselves become unsustainable.
Expiration Theory is influenced by Khanna and Palepu’s (2010) work on institutional voids, but it changes the focus. While institutional voids point to the absence of market infrastructure, such as contract enforcement and credit access, that stifle growth, expiration highlights the decline of assumptions in existing institutions. The issue is not the absence of an institution but rather the presence of one whose relevance has faded in an AI-driven environment.
In each of these cases, the institution’s logic is at odds with its environment, and no amount of incremental reform can bridge that gap.

3.7. Criteria for Institutional Expiration

Based on the previous discussion, four diagnostic criteria determine if a system is nearing expiration:
  • Assumption Obsolescence: The fundamental beliefs justifying the institution are no longer valid, either empirically or normatively.
  • Structural Mismatch: The institution’s structure is too slow and rigid to effectively engage with AI-driven environments.
  • Loss of Trust: Users, citizens, or stakeholders start to doubt the institution’s relevance and authority.
  • Inability to Interpret Change: The institution lacks the necessary tools to understand, predict, or assess changes resulting from AI.
Figure 1 clearly illustrates the relationship between the diagnostic criteria and the main research question. These criteria form the foundation of the AI Pressure Clock, which will be explained in the next section. It transforms this theoretical insight into a practical diagnostic framework for real-world systems.

3.8. Methodological Approach

This paper introduces Expiration Theory and the AI Pressure Clock as a conceptual study using deductive reasoning. Instead of relying on data or empirical testing, the theory synthesizes interdisciplinary literature, sectoral patterns, and insights from frameworks of disruption and institutional change.
Conceptual Reasoning
This work adopts a deductive conceptual approach, starting with established theories: disruption theory (Christensen, 1997), institutional legitimacy (Suchman, 1995), and AI governance (Floridi & Cowls, 2019). It highlights their limitations in explaining institutional decline in AI contexts. From these gaps, the Expiration Theory is introduced to explain institutional misalignment as a result of assumption decay rather than performance failure.
Derivation of the AI Pressure Clock
The AI Pressure Clock was developed using a unique conceptual framework that integrates two intersecting variables:
1. Degree of AI Disruption (drawn from the literature on algorithmic displacement and automation pressure).
2. Institutional Resilience (adapted from the institutional literature on adaptation, legitimacy, and epistemic flexibility).
The quadrant model was developed without collecting primary empirical data, such as interviews, surveys, or observational studies. It is based solely on a deductive synthesis of existing literature and case examples from various sectors, including law, healthcare, finance, education, and the creative industries.
Toward Empirical Operationalization
To enhance its future utility, we outline three pathways for operationalizing and validating the model:
  • Case Study Design: Researchers can assign specific institutions to the AI Pressure Clock according to their level of AI integration and ability to adapt, such as through procedural reform, explainability audits, and staff retraining.
  • Indicator Development: Factors such as algorithmic opacity, rapid decision-making, and procedural challenges can be turned into metrics for assessing institutional expiration risk.
  • Longitudinal Studies: Monitoring changes in institutional coherence, perceptions of legitimacy, and structural reform over time will help evaluate the model’s predictive and diagnostic capabilities.
By clarifying these conceptual foundations and research pathways, this section aims to increase the methodological transparency and theoretical robustness of the proposed framework.

4. The AI Pressure Clock Framework

This section introduces the AI Pressure Clock, a framework for assessing institutional vulnerability in the face of AI disruption. It maps institutions along two dimensions: the level of AI disruption they experience and their resilience, that is, their ability to adapt while maintaining legitimacy and relevance. Institutions differ in their responses to AI; some face critical stress, while others appear stable but are becoming vulnerable. The AI Pressure Clock distinguishes four quadrants that indicate where redesign is most urgent; a brief illustrative classification sketch follows the quadrant descriptions below.
The AI Pressure Clock (Figure 2) maps institutions across two dimensions:
  • AI Disruption Level (vertical axis): The extent to which artificial intelligence technologies confront, substitute, or reorganize the fundamental functions of an institution.
  • Institutional Resilience (horizontal axis): The institution’s capacity to adapt, respond, reform, and realign in light of the epistemic changes introduced by artificial intelligence.
Each quadrant reflects a distinct risk and reform profile:
  • Quadrant I—AI-Disrupted and Fragile
High AI Disruption, Low Resilience
Institutions in this zone are nearing expiration. Their operating models clash with AI logic, and they lack the capacity to redesign themselves.
Examples:
  • Legal systems dependent on slow, precedent-based decision-making.
  • Electoral oversight bodies vulnerable to algorithmic manipulation.
  • Traditional HR departments struggling with algorithmic hiring processes.
Interpretation: Time is rapidly diminishing. A radical redesign is required.
  • Quadrant II—AI-Disrupted but Adaptive
High AI Disruption, High Resilience
These institutions are highly engaged with AI and are effectively adapting to it, making them strong examples of transformation in the era of AI.
Examples:
  • Healthcare systems integrating diagnostic AI.
  • Logistics sectors optimizing with machine learning.
  • Financial firms using AI for compliance and risk management.
Interpretation: The level of disruption is considerable; however, survival can be achieved through strategic adaptation.
  • Quadrant III—Stable but Vulnerable
Low AI Disruption, Low Resilience
These institutions seem stable currently, but they are fragile. A rapid rise in AI use could quickly make them outdated.
Examples:
  • Higher education systems based on rigid curricula.
  • Civil service frameworks using traditional methods.
  • Faith-based governance organizations with established hierarchies.
Interpretation: Time may appear plentiful; however, complacency presents significant dangers.
  • Quadrant IV—Low-Threat Zones
Low AI Disruption, High Resilience
These domains are the least affected by current AI developments and can adapt as needed. However, they should not be overlooked, as advanced AI could eventually reach into these domains.
Examples:
  • Creative sectors (e.g., live performing arts).
  • Niche artisan careers.
  • Community-driven governance frameworks with integrated redundancy.
Interpretation: Presently secure; however, it is imperative to monitor potential future encroachments of artificial intelligence.
  • Strategic Utility
The AI Pressure Clock is designed to
  • Diagnose where the risk of expiration is highest.
  • Help decision-makers prioritize transformation efforts.
  • Guide researchers in identifying case studies of fragile versus adaptive institutions.
  • Develop an early-warning system to identify epistemic misalignment between human systems and AI environments.
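To make the quadrant logic concrete, the following minimal sketch (in Python) shows one way the two dimensions could be operationalized as a simple classification rule. The 0–1 scales, the 0.5 cut-points, and the example scores are illustrative assumptions rather than validated measurements.

```python
# Illustrative sketch only: scales, cut-points, and example scores are assumed, not empirical.

def classify_quadrant(ai_disruption: float, resilience: float, cutoff: float = 0.5) -> str:
    """Map an institution's AI disruption and resilience scores (each on a 0-1 scale)
    onto the four quadrants of the AI Pressure Clock."""
    high_disruption = ai_disruption >= cutoff
    high_resilience = resilience >= cutoff
    if high_disruption and not high_resilience:
        return "Quadrant I: AI-Disrupted and Fragile (radical redesign required)"
    if high_disruption and high_resilience:
        return "Quadrant II: AI-Disrupted but Adaptive (survival through strategic adaptation)"
    if not high_disruption and not high_resilience:
        return "Quadrant III: Stable but Vulnerable (complacency is the main danger)"
    return "Quadrant IV: Low-Threat Zone (secure for now; monitor AI encroachment)"

# Hypothetical scores for institutions of the kind discussed in this paper.
examples = {
    "precedent-based court system": (0.8, 0.2),
    "diagnostic-AI hospital network": (0.9, 0.7),
    "rigid-curriculum university": (0.3, 0.3),
    "live performing-arts company": (0.2, 0.8),
}
for name, (disruption, resilience) in examples.items():
    print(f"{name}: {classify_quadrant(disruption, resilience)}")
```

In practice, such scores would be derived through the indicator development and auditing pathways described in Section 3.8, not from analyst judgment alone.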

4.1. Interpretation and Use over Time

The AI Pressure Clock serves as a dynamic diagnostic framework, rather than a fixed classification tool. Institutions can move between quadrants based on their levels of AI disruption and adaptability.
For example, a traditional university in Quadrant III (Stable but Vulnerable) could move to Quadrant I (AI-Disrupted and Fragile) if algorithmic education systems replace its core functions without curriculum changes. On the other hand, entities like courts using opaque AI for sentencing can shift from Quadrant I to Quadrant II (AI-Disrupted but Adaptive) by adopting explainable AI, human oversight, and algorithmic audits.
The AI Pressure Clock should be used over time to monitor how institutions adapt to AI challenges through redesign and retraining. Future research might create indicators or dashboards to track these shifts and identify signs of institutional decline.

4.2. Risks of Misuse and Oversimplification

The quadrant typology provides clarity but can oversimplify the complexities of institutions. Institutions often do not fit neatly into one category, and their resilience regarding AI varies across different areas. For example, a healthcare institution might excel in diagnostics but struggle with issues such as ethics or data governance.
Using this framework carelessly may lead to hasty classifications of institutions as outdated or fragile without a proper context. Therefore, policymakers and researchers should view the AI Pressure Clock as a tool for analysis, not a definitive model. Its value lies in identifying patterns of epistemic risk rather than making absolute judgments.
Ultimately, the effectiveness of this framework relies on informed interpretation, dialog across sectors, and ongoing refinement through research.

4.3. Linking the Assumption Decay Index to the AI Pressure Clock

The Assumption Decay Index (ADI) offers a practical approach for applying Expiration Theory. By measuring factors like interpretive capacity, epistemic coherence, and stakeholder trust in algorithms, the ADI can assess how well institutions align in AI-driven settings. It can be mapped to the AI Pressure Clock framework, helping to categorize institutions into one of four categories: Stable and Aligned, Disrupted and Fragile, Resistant but Misaligned, or Expired. For instance, institutions with high assumption decay but low interpretive responsiveness would be categorized as Expired, while those with low decay and strong oversight would be Stable and Aligned. The ADI serves as a valuable tool for identifying epistemic risks, guiding reforms, and evaluating institutional readiness across various sectors.
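As an illustration of how such an index might be computed, the sketch below combines three hypothetical indicator scores (interpretive capacity, epistemic coherence, and stakeholder trust in algorithmic oversight) into a composite ADI and assigns one of the four categories named above. The indicators, the equal weighting, and the thresholds are assumptions made for illustration; the assignment of the two intermediate categories is likewise our assumption, since the text above fixes only the two extreme cases.

```python
# Illustrative sketch: indicators, equal weights, and 0.5 thresholds are assumptions.

def assumption_decay_index(interpretive_capacity: float,
                           epistemic_coherence: float,
                           stakeholder_trust: float) -> float:
    """Composite ADI on a 0-1 scale; higher values indicate greater assumption decay.
    Each input is an alignment score on a 0-1 scale, so decay = 1 - mean(alignment)."""
    alignment = (interpretive_capacity + epistemic_coherence + stakeholder_trust) / 3
    return 1.0 - alignment

def adi_category(adi: float, interpretive_responsiveness: float) -> str:
    """Map decay level and interpretive responsiveness to one of the four ADI categories."""
    high_decay = adi >= 0.5
    responsive = interpretive_responsiveness >= 0.5
    if high_decay and not responsive:
        return "Expired"
    if high_decay and responsive:
        return "Disrupted and Fragile"
    if not high_decay and not responsive:
        return "Resistant but Misaligned"
    return "Stable and Aligned"

adi = assumption_decay_index(interpretive_capacity=0.3,
                             epistemic_coherence=0.4,
                             stakeholder_trust=0.2)
print(f"ADI = {adi:.2f} -> {adi_category(adi, interpretive_responsiveness=0.35)}")
```

Validated multi-item scales, as proposed in Section 6.2, would be needed before such a computation could carry diagnostic weight.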

5. Sectoral Applications of Expiration Theory

This section applies the Expiration Theory and the AI Pressure Clock to illustrate how institutions worldwide are facing epistemic stress due to AI. While much of the current literature focuses on Western contexts, institutional expiration occurs globally, influenced by developmental differences, cultural authority, and governance traditions. We incorporate both Global North and Global South perspectives, as well as institutions often overlooked in discussions on AI disruption, such as religious and humanitarian organizations.

5.1. Legal Systems (Quadrant I: AI-Disrupted and Fragile)

AI tools, such as COMPAS for predictive sentencing in the U.S. and China’s smart court systems, illustrate how traditional legal norms, including transparency and due process, are being challenged by opaque algorithms. Courts frequently use these tools without adequate standards for interpretability or safeguards for appeal.
Reform priority: AI-adapted legal reasoning, explainability mandates, and algorithmic accountability.

5.2. Financial Systems (Quadrant I/II: Diverging)

Banks in Western Europe are struggling with the explainability of AI models used for risk scoring and fraud detection. In contrast, mobile-based micro-lending platforms in Southeast Asia are effectively using behavioral AI to assess informal borrowers, often skipping traditional credit systems (Petrenko, 2025). While these digital-native institutions demonstrate resilience, traditional banks struggle to integrate AI and navigate regulatory complexities.
Reform priority: Human-in-the-loop compliance, AI-aligned governance, and financial inclusion standards.

5.3. Healthcare (Quadrant II: AI-Disrupted but Adaptive)

AI-assisted diagnostics, including radiology and triage systems, are widely adopted in both high- and middle-income countries. In India, AI facilitates retinal screening in remote areas, improving access and changing medical roles (Centre for Eye Research Australia, 2023). The success of these initiatives relies on institutional backing for retraining, interdisciplinary collaboration, and oversight of AI ethics.
Reform priority: Role redesign, explainable diagnostics, and local cultural validation of AI systems.

5.4. Higher Education (Quadrant III: Stable but Vulnerable)

Universities worldwide continue to depend on rigid credentialing, faculty-led knowledge transfer, and degree-based evaluations. In Sub-Saharan Africa, the rise of AI teaching assistant platforms, such as M-Shule, puts public institutions at risk of becoming outdated if they do not update their curricula to keep pace with informal learning innovations.
Reform priority: Modular AI-integrated curricula, micro-credential ecosystems, and institutional retraining.

5.5. Religious Institutions (Quadrant III/IV: Stable but Vulnerable or Low Threat)

Religious systems in the Global South hold significant authority and legitimacy. However, the use of AI-generated sermons, virtual clergy, and algorithmic matchmaking in Islamic and Christian contexts introduces theological, procedural, and pastoral challenges (Ungar-Sargon, 2025). Dependence on AI for spiritual guidance may lead to a shift away from core interpretive traditions.
Reform priority: Clarify doctrinal boundaries, human-led digital ethics boards, and faith-tech dialog frameworks.

5.6. Humanitarian Systems (Quadrant II: AI-Disrupted but Adaptive)

AI is increasingly being utilized in humanitarian work for crisis prediction, food distribution, and identifying refugees. For example, UNHCR uses biometric registration, and the World Food Programme employs AI for resource allocation. While these technologies enhance efficiency and reach, they also raise concerns about consent, errors, and explainability (Coppi et al., 2021).
Reform priority: Ethical accountability boards, algorithmic transparency in aid allocation, and participatory design.
Table 2 summarizes the sectoral analyses by showing how various institutions align with the AI Pressure Clock. It indicates each sector’s current quadrant based on AI disruption levels and institutional resilience, along with the main epistemic risks and necessary reforms to avoid decline. This table enables quick cross-sector comparisons, identifies urgent issues, and highlights targeted interventions necessary to maintain epistemic alignment and institutional legitimacy in an AI-driven environment.

6. Theoretical and Research Implications

The Expiration Theory and the AI Pressure Clock framework contribute to the literature on institutional resilience, technological disruption, and AI governance. We highlight how these concepts challenge existing theories, expand current models, and create new opportunities for interdisciplinary research.

6.1. Theoretical Contributions

1. Epistemic Misalignment as Core Mechanism: Expiration Theory views institutional decline under AI not as a performance failure or loss of trust, but as assumption decay—the point at which an institution’s core beliefs no longer match algorithmic logic.
2. AI as a Meta-Disruptor of Institutional Logic: Expiration Theory highlights that AI can redefine valid knowledge, authority, and procedural legitimacy, demanding cognitive adjustments that extend beyond simple innovation.
3. Complement to Procedural Governance Models: Recent AI oversight frameworks, such as the OECD (2024a) AI Principles and the EU AI Act, emphasize transparency and human involvement. Expiration Theory complements these frameworks by arguing that human accountability structures and epistemic audits are also needed to prevent assumption decay and maintain institutional coherence.
Expiration Theory clarifies why human oversight and epistemic review are essential, not only to reduce risk but to ensure institutional coherence and legitimacy in AI-influenced areas. Oversight efforts must be paired with a clear understanding of the risks associated with not adopting AI or adopting it superficially. According to Frimpong (2025b), failing to integrate AI effectively can lead institutions into “AI Extinction Zones,” where operational stagnation and poor leadership result in the loss of relevant knowledge. This supports Expiration Theory, highlighting that the risks come not just from technology failing, but also from a disconnect between institutional beliefs and the logic of algorithms.
Thus, the theory contributes to the meta-governance literature by presenting AI as a disruptor not only of roles and routines but also of reasoning systems themselves, prompting institutions to reevaluate their knowledge processes, decision-making, and standards for valid judgment.

6.2. Research Implications

To make Expiration Theory suitable for empirical studies, we categorize methods into three groups:
Qualitative Methods
  • Case Studies: Conduct in-depth studies on institutions facing AI-related epistemic challenges, such as courts using predictive sentencing tools. Use interviews and document analysis to detect early signs of assumption decay.
  • Delphi Panels: Conduct structured expert elicitation with lawyers, data scientists, and public administrators to refine diagnostic criteria and the Assumption Decay Index (ADI).
  • Discourse and Ethnography: Examine internal communications and rituals to understand how institutional narratives evolve or struggle to adapt in response to algorithmic decision-making.
Quantitative Methods
  • Assumption Decay Index (ADI) Surveys: Develop multi-item scales (such as interpretability, trust, and procedural alignment) to measure assumption decay in organizations.
  • Longitudinal Panel Studies: Track institutions’ ADI and performance/legitimacy metrics over time to test predictive validity of the AI Pressure Clock.
  • Comparative Mapping: Use cluster analysis or latent class modeling to position institutions on the clock according to their composite ADI and resilience scores.
Experimental and Simulation Methods
  • Role-Play Simulations: Design decision-making exercises where participants manage AI recommendations with different levels of transparency and override conditions, focusing on any interpretive challenges that arise.
  • Sandbox Field Experiments: Collaborate with agencies to test various oversight models (e.g., with and without URO presence) and track changes in assumption decay indicators.
  • Agent-Based Modeling: Simulate institutional actors with different cognitive abilities and AI exposure levels to analyze how changes in disruption pressure or resilience resources affect systemic expiration; a minimal simulation sketch along these lines follows this list.
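To indicate how the agent-based modeling pathway might begin, the following minimal simulation treats each institution as an agent whose assumption decay accumulates whenever AI disruption pressure exceeds its resilience, and whose expiration is declared once decay crosses a threshold. All parameters (decay rate, learning rate, expiration threshold, and the pressure trajectory) are illustrative assumptions rather than calibrated values.

```python
# Minimal agent-based sketch: every parameter here is an illustrative assumption.
import random

class Institution:
    def __init__(self, name: str, resilience: float, learning_rate: float):
        self.name = name
        self.resilience = resilience        # interpretive/adaptive capacity (0-1)
        self.learning_rate = learning_rate  # how quickly resilience is rebuilt per step
        self.decay = 0.0                    # accumulated assumption decay (0-1)
        self.expired = False

    def step(self, pressure: float) -> None:
        if self.expired:
            return
        # Decay grows when disruption pressure exceeds resilience, recedes otherwise.
        self.decay = min(1.0, max(0.0, self.decay + 0.1 * (pressure - self.resilience)))
        # Adaptive institutions slowly rebuild interpretive capacity.
        self.resilience = min(1.0, self.resilience + self.learning_rate)
        if self.decay >= 0.8:               # assumed expiration threshold
            self.expired = True

random.seed(42)
agents = [Institution("fragile court", resilience=0.2, learning_rate=0.00),
          Institution("adaptive hospital", resilience=0.5, learning_rate=0.02),
          Institution("stable university", resilience=0.4, learning_rate=0.01)]

for t in range(1, 31):                       # thirty periods of rising AI pressure
    pressure = min(1.0, 0.2 + 0.03 * t + random.uniform(-0.05, 0.05))
    for agent in agents:
        agent.step(pressure)

for agent in agents:
    status = "EXPIRED" if agent.expired else f"decay = {agent.decay:.2f}"
    print(f"{agent.name}: {status}")
```

Varying the pressure trajectory or the resilience resources of individual agents would allow researchers to explore how quickly systemic expiration spreads under different governance conditions.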

6.3. Expiration Velocity

Expiration velocity refers to the rate at which assumptions weaken under the pressure of AI. Understanding its dynamics makes the AI Pressure Clock a more effective diagnostic; a minimal computational sketch follows the guiding questions below.
  • Acceleration Factors
    Rapid deployment of opaque AI systems without human oversight.
    High-stakes decision environments, such as emergency response, where misinterpretations carry amplified costs.
    Rigid organizational cultures that lack feedback and learning mechanisms.
  • Deceleration Factors
    Strong interpretive frameworks, such as algorithmic impact assessments and URO oversight.
    Gradual AI integration with repeated sandbox testing.
    Rotational oversight panels for distributed accountability.
  • Guiding Questions
    • Thresholds: What percentage drop in ADI metrics (e.g., a 25% decrease in explainability scores) causes a quadrant shift?
    • Tipping Points: Do specific events, like implementing a new AI module, significantly speed up assumption decay?
    • Intervention Lags: How long after implementing oversight reforms (URO, sandbox) do we see measurable slowdowns in ADI trends?
    • Contextual Moderators: What institutional factors (sector, size, regulatory environment) most influence expiration velocity?
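The sketch below makes the threshold and velocity questions concrete: it estimates expiration velocity as the period-over-period change in an ADI time series and flags a warning when a tracked alignment indicator, such as an explainability score, falls by 25% or more from its baseline. The time series, the quarterly framing, and the 25% cut-off are illustrative assumptions.

```python
# Illustrative sketch: the series and the 25% cut-off are hypothetical assumptions.

def expiration_velocity(adi_series):
    """Period-over-period change in the Assumption Decay Index (positive = decaying faster)."""
    return [later - earlier for earlier, later in zip(adi_series, adi_series[1:])]

def first_threshold_breach(indicator_series, drop_threshold=0.25):
    """Return the first period in which an alignment indicator (e.g., an explainability
    score) has fallen by drop_threshold or more relative to its baseline value."""
    baseline = indicator_series[0]
    for period, value in enumerate(indicator_series):
        if (baseline - value) / baseline >= drop_threshold:
            return period
    return None

# Hypothetical quarterly observations for a single institution.
adi = [0.20, 0.24, 0.31, 0.42, 0.55]             # rising assumption decay
explainability = [0.80, 0.74, 0.66, 0.58, 0.49]  # falling explainability score

print("velocity per quarter:", expiration_velocity(adi))
print("quadrant-shift warning at quarter:", first_threshold_breach(explainability))
```

Intervention lags could then be examined by comparing such velocity series before and after reforms such as URO appointments or sandbox trials.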
By clearly outlining our theoretical contributions, organizing research directions, and defining the operational concept of expiration velocity, this section offers both the conceptual insights and empirical guidance necessary to develop Expiration Theory and its diagnostic tools.

6.4. Bridging Institutional Theory and AI Ethics

This paper connects AI ethics with institutional design. While much of the AI ethics literature focuses on technical issues like bias and fairness at the model level (Floridi & Cowls, 2019), many ethical challenges arise from institutions that are not equipped to interpret, govern, or take responsibility for AI systems.
Implication:
  • AI ethics should include institutional epistemology, focusing on how organizations acquire knowledge, make decisions, and justify actions in AI-driven settings.
  • Shift from model-level interventions to system-level reforms by embedding AI in new governance structures.

6.5. Contributions to Leadership Theory

The leadership literature highlights the importance of transformation, adaptability, and ethical responsibility (Avolio et al., 2009). However, Expiration Theory suggests that some leadership models, particularly those based on exclusive human authority and hierarchical control, are becoming outdated.
Implications of the Expiration Theory:
  • It calls for post-human leadership models that integrate collaborative intelligence and algorithmic co-governance.
  • It opens an inquiry into AI-augmented moral authority, distributed responsibility, and machine-mediated influence in decision-making.

6.6. Interdisciplinary Research Agenda

Expiration Theory invites scholars from various domains to study the following:
  • How different sectors experience varying speeds of epistemic stress from AI.
  • Which institutional safeguards can help prevent premature expiration.
  • How assumption decay can be measured in both normative and procedural systems.
  • How hybrid institutions are emerging to promote human-AI collaboration.
Potential Disciplines Involved:
  • Organizational studies.
  • AI ethics and governance.
  • Public administration and policy.
  • Law and jurisprudence.
  • Education, finance, and health system studies.
  • STS (Science, Technology, and Society).
The Expiration Theory shifts our view of AI disruption from replacement to an issue of how well institutions adapt and maintain their legitimacy. The AI Pressure Clock serves as a key framework for identifying where knowledge is at risk of becoming outdated and which interventions are most urgently needed.

7. Policy and Practice Implications

The Expiration Theory is a conceptual framework with practical implications for policymakers, regulators, executives, and institutional designers. As AI drives significant changes across various sectors, the focus should be on both adapting technology and realigning institutions to meet these changes. This section explains how the AI Pressure Clock can guide strategy, oversight, and reform in a practical manner.

7.1. Early-Warning Systems for Institutional Obsolescence

In an AI-driven world, it is crucial to develop early-warning systems that can detect when institutions are becoming obsolete in their understanding and practices. Many organizations lack the necessary tools to keep pace with the rapid changes brought on by AI. The AI Pressure Clock provides a framework that can be incorporated into institutional risk assessments. Governments and regulatory bodies should establish AI observatories or cross-sector monitoring units to track the integration of algorithms and identify resilience gaps. These entities can conduct regular audits to evaluate not only digital maturity but also deeper issues, such as declining assumptions, loss of legitimacy, and governance delays. Early detection of these vulnerabilities enables proactive reforms, allowing institutions to adapt before they reach a point of irrelevance.

7.2. Prioritized Institutional Reform

Not all institutions can be reformed simultaneously due to limited time and resources. The AI Pressure Clock identifies reform priorities:
  • Quadrant I (AI-Disrupted and Fragile): Institutions such as justice systems and electoral bodies, which are significantly impacted by AI and lack resilience, require immediate foundational reform.
  • Quadrant II (AI-Disrupted but Adaptive): These institutions should enhance their adaptive capacity with ethical oversight, improved governance, and cross-disciplinary collaboration.
  • Quadrant III (Stable but Vulnerable): While currently stable, these institutions should modernize proactively to prepare for future disruptions.
  • Quadrant IV (Low-Threat Zones): Institutions in this quadrant require minimal updates.
This quadrant-based framework helps policymakers create targeted and context-specific reform strategies.

7.3. Embedding AI Accountability in Governance Design

As algorithmic systems assume decision-making roles, establishing accountability for errors and ethical lapses becomes increasingly challenging. To address this, institutions should adopt AI accountability mechanisms from the outset. Under the EU’s GDPR, for example, many organizations are required to appoint a Data Protection Officer (DPO) to oversee how personal data is handled—essentially, a designated individual accountable for data practices. Similarly, major technology companies, such as Microsoft, have established senior AI ethics roles whose holders are responsible for approving high-risk projects.
One effective solution is appointing an Ultimate Responsibility Owner (URO) (Frimpong, 2025a)—a human authority responsible for overseeing the ethical and legal aspects of AI-driven decisions. Unlike technical or compliance staff, the URO focuses on interpretive judgment, human override, and systemic accountability. In the public sector, this involves algorithmic impact assessments and regulatory reviews, while in the private sector, it is addressed through AI ethics committees or audit panels.
While the URO role is straightforward in regulated environments, such as finance and healthcare, it may be more challenging in resource-limited sectors, including local governments and humanitarian organizations. In these cases, alternatives like rotational oversight committees may help achieve accountability without overhauling existing structures.
Implementing the URO framework in various contexts ensures accountability is human-centered, maintains institutional legitimacy, and reinforces ethical governance in an AI-driven world.

7.4. Workforce and Leadership Retraining

Institutions require secure environments to test and refine their governance before implementing AI. Institutional sandboxes, adapted from fintech regulatory models, provide a framework for controlled experimentation with AI tools in real-world settings. These sandboxes help identify risks, such as assumption decay and interpretive mismatches, before they impact high-stakes decisions. They also encourage collaboration among ethicists, technologists, and institutional leaders to assess the implications of AI use.
However, the feasibility of establishing sandboxes varies across sectors. Digitally advanced institutions, such as central banks and universities, can easily integrate them. In contrast, public services and humanitarian agencies may struggle to access the necessary resources and technical expertise to maintain effective sandboxes.
To address this, regional alliances, public–private partnerships, or donor-supported AI governance hubs can provide shared infrastructure and reduce burdens. The aim is to tailor governance experimentation to sector-specific needs while preventing the hurried or unethical integration of AI. Institutional sandboxes enable organizations to adopt AI responsibly, thereby reducing the risk of disrupting their core operations.

7.5. Institutional Innovation Infrastructure

Expiration Theory suggests that some outdated institutions cannot be fixed and need to be completely reimagined. This requires significant investment in new institutional innovation, creating experimental governance frameworks that incorporate AI from the start. Governments and multilateral agencies should establish “institutional sandboxes” to test alternative governance models, such as algorithm-based judicial systems and AI-supported policy planning. These zones can help develop new accountability, transparency, and participatory governance methods. Additionally, philanthropic and academic groups should establish design labs where technologists, ethicists, social scientists, and communities collaborate on governance solutions. Without this infrastructure, we risk applying outdated institutional models to modern problems, which will not solve the issues and may even worsen them.

7.6. Summary: From Awareness to Action

Table 3 summarizes the paper’s recommendations in a straightforward sequence, guiding actions from diagnosing expiration risk to implementing strategies. It connects each goal, such as “Detect Expiration Early,” with specific measures, including resilience audits and quadrant mapping, transforming insights into a practical roadmap. This summary helps prioritize efforts by establishing early-warning indicators, aligning reforms based on quadrant severity, assigning accountability, enhancing cognitive capacity for interpreting AI impacts, and investing in long-term institutional innovation. This clear progression facilitates the shift from recognizing epistemic risks to enacting tangible organizational change.

8. Limitations

While the Expiration Theory and the AI Pressure Clock provide a fresh perspective on institutional vulnerability in the AI era, they also have certain limitations that need to be mentioned:
  • Conceptual Focus
    This work is theoretical, synthesizing the existing literature to introduce new concepts, such as assumption decay and the AI Pressure Clock. However, it lacks empirical validation. Future research should operationalize and test these ideas in real-world scenarios.
  • Contextual Variability
    Institutional responses to AI vary based on cultural, political, and economic factors. Governance maturity, regulatory frameworks, and available resources influence how organizations understand and implement the AI Pressure Clock. Our framework is adaptable, but it requires adjustments to fit local conditions.
  • Risk of Oversimplification
    Simplifying complex institutional dynamics into four quadrants or criteria overlooks important nuances. This model should be used as a helpful guide, rather than a strict classification.
  • Future Refinements
We anticipate expanding this work through the following:
  • Empirically validating the Assumption Decay Index using case studies and survey research.
  • Investigating how the AI Pressure Clock can be adapted to specific sectors and regions.
  • Incorporating stakeholder perspectives to fine-tune interpretive thresholds and enhance practical guidance.
  • Running intervention trials, such as URO appointments or institutional sandbox pilots, to assess their practical effectiveness and refine governance prototypes.
We highlight these limitations to strengthen the framework’s academic rigor and to inform future empirical and contextual refinement.

9. Conclusions: Rethinking Institutional Viability in the Age of AI

This paper presents Expiration Theory, a framework that shifts institutional analysis from performance disruption and adaptive resilience to the decay of fundamental assumptions brought about by artificial intelligence. It identifies four core mechanisms of institutional expiration: assumption obsolescence, structural mismatch, loss of trust, and inability to interpret change. Expiration Theory builds on existing disruption and legitimacy models. The AI Pressure Clock is introduced as a diagnostic tool that positions institutions according to AI disruption pressure and epistemic resilience, indicating when deeper cognitive realignment is required beyond mere adaptation.
The key takeaways for the three audiences are as follows:
  • Researchers: Expiration Theory reframes institutional change by highlighting cognitive alignment and epistemic validity alongside structural dynamics.
  • Policymakers: The framework emphasizes the need for human accountability and safeguards in AI oversight, through mechanisms such as the Ultimate Responsibility Owner (URO) role and sandbox environments, rather than reliance on performance metrics alone.
  • Institutional leaders and designers: To maintain relevance and consistency in an AI-driven world, institutions should focus on early detection of assumption decay, conduct specific resilience audits, and implement targeted interventions based on the severity of the situation.
This conclusion highlights the importance of maintaining institutional viability in the era of AI. Achieving this requires not only the adoption of technology but also a sustained focus on the cognitive foundations that support legitimacy and informed decision-making.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Acemoğlu, D., Egorov, E., & Sonin, K. (2021). Institutional change and institutional persistence. Elsevier eBooks, 365–389.
  2. Avolio, B. J., Walumbwa, F. O., & Weber, T. J. (2009). Leadership: Current theories, research, and future directions. Annual Review of Psychology, 60(1), 421–449.
  3. Baumgartner, F. R., & Jones, B. D. (2010). Agendas and instability in American politics (2nd ed.). University of Chicago Press.
  4. Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018, April 21–26). It’s reducing a human being to a percentage. 2018 CHI Conference on Human Factors in Computing Systems—CHI’18 (pp. 1–14), Montreal, QC, Canada.
  5. Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W. W. Norton & Company.
  6. Centre for Eye Research Australia. (2023). AI in healthcare series workshop: 11-applications of AI: Experiences from Aravind eye care system. YouTube. Available online: https://www.youtube.com/watch?v=wrf3bg5hwEg (accessed on 30 June 2025).
  7. Chemma, N. (2021). Disruptive innovation in a dynamic environment: A winning strategy? An illustration through the analysis of the yoghurt industry in Algeria. Journal of Innovation and Entrepreneurship, 10(1), 34.
  8. Choi, T., & Park, S. (2021). Theory building via agent-based modeling in public administration research: Vindications and limitations. International Journal of Public Sector Management, 34(6), 614–629.
  9. Christensen, C. M. (1997). The innovator’s dilemma: When new technologies cause great firms to fail. Harvard Business School Press.
  10. Clapp, M. (2020). Assessing the efficacy of an institutional effectiveness unit. Assessment Update, 32(3), 6–13.
  11. Coppi, G., Moreno Jimenez, R., & Kyriazi, S. (2021). Explicability of humanitarian AI: A matter of principles. Journal of International Humanitarian Action, 6(1).
  12. Coşkun, R., & Arslan, S. (2024). The role of organizational language in gaining legitimacy from the perspective of new institutional theory. Journal of Management & Organization, 30(6).
  13. Danaher, J. (2019). Automation and utopia: Human flourishing in a world without work (pp. 87–131). Harvard University Press.
  14. de Vaujany, F.-X., Vaast, E., Clegg, S. R., & Aroles, J. (2020). Organizational memorialization: Spatial history and legitimation as chiasms. Qualitative Research in Organizations and Management: An International Journal, 16(1), 76–97.
  15. DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147–160.
  16. Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv, arXiv:1702.08608.
  17. Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118.
  18. EU. (2024). The artificial intelligence act. Available online: https://artificialintelligenceact.eu/the-act/ (accessed on 30 June 2025).
  19. Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
  20. Feng, L., Qin, G., Wang, J., & Zhang, K. (2022). Disruptive innovation path of start-ups in the digital context: The perspective of dynamic capabilities. Sustainability, 14(19), 12839.
  21. Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1).
  22. Frimpong, V. (2025a). Artificial Intelligence on trial: Who is responsible when systems fail? Toward a framework for the ultimate AI accountability owner. Preprints.
  23. Frimpong, V. (2025b). The strategic risk of delayed and non-adoption of artificial intelligence: Evidence from organizational decline in extinction zones. International Journal of Business and Management, 20(4), 26.
  24. Gidley, D. (2020). Creating institutional disruption: An alternative method to study institutions. Journal of Organizational Change Management, ahead-of-print.
  25. Grint, K. (2008). Leadership, management and command: Rethinking D-Day. Palgrave Macmillan.
  26. Guo, J., Shi, M., Peng, Q., & Zhang, J. (2023). Ex-ante project management for disruptive product innovation: A review. Journal of Project Management, 8(1), 57–66.
  27. Hacker, J. S. (2004). Privatizing risk without privatizing the welfare state: The hidden politics of social policy retrenchment in the United States. American Political Science Review, 98(2), 243–260. Available online: https://EconPapers.repec.org/RePEc:cup:apsrev:v:98:y:2004:i:02:p:243-260_00 (accessed on 2 July 2025).
  28. He, K., & Feng, H. (2019). Leadership transition and global governance: Role conception, institutional balancing, and the AIIB. The Chinese Journal of International Politics, 12(2), 153–178.
  29. ISACA. (2024). How to build digital trust in AI with a robust AI governance framework. Available online: https://www.isaca.org/resources/isaca-journal/issues/2024/volume-3/how-to-build-digital-trust-in-ai-with-a-robust-ai-governance-framework (accessed on 2 July 2025).
  30. Jahan, I., Bologna Pavlik, J., & Williams, R. B. (2020). Is the devil in the shadow? The effect of institutional quality on income. Review of Development Economics, 24(4), 1463–1483.
  31. Khanna, T., & Palepu, K. G. (2010). Winning in emerging markets: A road map for strategy and execution (pp. 13–26). Harvard Business Press.
  32. Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2017). Human decisions and machine predictions. The Quarterly Journal of Economics, 133(1), 237–293.
  33. Krajnović, A. (2020). Institutional distance. Journal of Corporate Governance, Insurance and Risk Management, 7(1), 15–24.
  34. Mahoney, J. (2000). Path dependence in historical sociology. Theory and Society, 29(4), 507–548.
  35. Ménard, C. (2022). Disentangling institutions: A challenge. Agricultural and Food Economics, 10(1), 16.
  36. Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1–21.
  37. OECD. (2024a). AI principles. OECD. Available online: https://www.oecd.org/en/topics/sub-issues/ai-principles.html (accessed on 30 June 2025).
  38. OECD. (2024b). Model AI governance framework for generative AI. Available online: https://oecd.ai/en/catalogue/tools/model-ai-governance-framework-for-generative-ai (accessed on 2 July 2025).
  39. Oesterling, N., Ambrose, G., & Kim, J. (2024). Understanding the emergence of computational institutional science: A review of computational modeling of institutions and institutional dynamics. International Journal of the Commons, 18(1).
  40. Oliver, C. (1991). Strategic responses to institutional processes. Academy of Management Review, 16(1), 145–179.
  41. Özer, M. Y. (2022). Informal sector and institutions. Theoretical and Practical Research in the Economic Fields, 13(2), 180.
  42. Pasquale, F. (2015). Black box society: The secret algorithms that control money and information. Harvard University Press.
  43. Petrenko, V. (2025). AI-based credit scoring: Revolutionizing financial assessments. Litslink. Available online: https://litslink.com/blog/ai-based-credit-scoring (accessed on 30 June 2025).
  44. Pieczewski, A. (2023). Poland’s institutional cycles. Remarks on the historical roots of the contemporary institutional matrix. UR Journal of Humanities and Social Sciences, 29(4), 5–21.
  45. Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., Crandall, J. W., Christakis, N. A., Couzin, I. D., Jackson, M. O., Jennings, N. R., Kamar, E., Kloumann, I. M., Larochelle, H., Lazer, D., McElreath, R., Mislove, A., Parkes, D. C., Pentland, A., … Roberts, M. E. (2019). Machine behaviour. Nature, 568(7753), 477–486.
  46. Roblek, V., Meško, M., Pušavec, F., & Likar, B. (2021). The role and meaning of the digital transformation as a disruptive innovation on small and medium manufacturing enterprises. Frontiers in Psychology, 12, 592528.
  47. Scott, W. R. (2014). Institutions and organizations: Ideas, interests, and identities. Sage Publications.
  48. Siddiki, S., Heikkila, T., Weible, C. M., Pacheco-Vega, R., Carter, D., Curley, C., Deslatte, A., & Bennett, A. (2019). Institutional analysis with the institutional grammar. Policy Studies Journal, 50(2), 315–339.
  49. Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., Chen, Y., Lillicrap, T., Hui, F., Sifre, L., van den Driessche, G., Graepel, T., & Hassabis, D. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354–359.
  50. Stoltz, D. S., Taylor, M. A., & Lizardo, O. (2019). Functionaries: Institutional theory without institutions. SocArXiv.
  51. Suchman, M. C. (1995). Managing legitimacy: Strategic and institutional approaches. Academy of Management Review, 20(3), 571–610.
  52. Sun, Y., & Zhou, Y. (2024). Specialized complementary assets and disruptive innovation: Digital capability and ecosystem embeddedness. Management Decision, 62(11), 3704–3730.
  53. Thelen, K. (2004). How institutions evolve: The political economy of skills in Germany, Britain, the United States, and Japan. Cambridge University Press.
  54. Ungar-Sargon, J. (2025). AI and spirituality: The disturbing implications. Journal of Medical-Clinical Research & Review, 9(3), 1–7. Available online: https://www.scivisionpub.com/pdfs/ai-and-spirituality-the-disturbing-implications-3742.pdf (accessed on 29 June 2025).
  55. Wen, H. (2024). Human-AI collaboration for enhanced safety. In Methods in chemical process safety (Vol. 8, pp. 51–80). Elsevier.
  56. Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Public Affairs.
Figure 1. Diagnostic criteria for institutional expiration in the age of AI.
Figure 2. The AI Pressure Clock framework.
Table 1. Disruption and expiration.
Feature | Disruption | Expiration
Mechanism | Competitive replacement | Epistemic breakdown
Agent | New market entrant | Exogenous force (e.g., AI)
Focus | Efficiency, price, accessibility | Assumptions, legitimacy, functional alignment
Scope | Product, service, organization | System, institution, model
Adaptation possible? | Yes—through innovation | Not always—may require structural rethinking
Business (Example) | Uber disrupting taxis | Algorithmic hiring expiring HR-led recruitment models
Public Sector (Example) | E-learning platforms disrupting in-person training | Predictive policing systems expiring court-based sentencing frameworks
Table 2. Sectoral expiration summary.
Sector | Quadrant | Primary Risk | Reform Priorities
Legal Systems | I—AI-Disrupted & Fragile | Procedural opacity, loss of fairness | Explainability, AI-adapted legal reasoning
Banking & Finance | I/II—Disrupted & Diverging | Governance mismatch, exclusion risks | Transparent AI use, compliance redesign, inclusion tools
Healthcare | II—AI-Disrupted but Adaptive | Role confusion, over-reliance on opaque tools | Interdisciplinary retraining, ethical AI oversight
Higher Education | III—Stable but Vulnerable | Curricular rigidity, credential obsolescence | AI-integrated learning models, modular reform
Religious Institutions | III/IV—Vulnerable/Low Threat | Theological incoherence, AI-led spiritual authority | Human-led doctrinal boundaries, ethical AI curation
Humanitarian Systems | II—AI-Disrupted but Adaptive | Algorithmic opacity in life-critical decisions | Participatory ethics, explainable aid systems
Table 3. Summary: From awareness to action.
Strategic Goal | Recommended Action
Detect Expiration Early | AI resilience audits, Pressure Clock quadrant mapping
Target Reform Strategically | Reform pathways by quadrant severity
Clarify Final Accountability | Introduce URO roles and human-in-the-loop mandates
Build Cognitive Readiness | AI-informed leadership retraining and staff development
Invest in Institutional Futures | Fund sandbox governance and adaptive regulatory models