Review

Generative AI as a General-Purpose Technology: Foundations, Applications, and Labor Market Implications Through 2030

Department of Business Technology, Miami Herbert Business School, University of Miami, Coral Gables, FL 33146, USA
Big Data Cogn. Comput. 2026, 10(3), 69; https://doi.org/10.3390/bdcc10030069
Submission received: 23 January 2026 / Revised: 15 February 2026 / Accepted: 24 February 2026 / Published: 27 February 2026
(This article belongs to the Section Large Language Models and Embodied Intelligence)

Abstract

Generative Artificial Intelligence (AI) has transitioned from a research milestone to a general-purpose technology with wide-ranging implications for organizations, labor markets, and information systems. Advances in deep learning, including generative adversarial networks (GANs), variational autoencoders (VAEs), diffusion models, transformer-based language models, and reinforcement learning from human feedback (RLHF), now enable generative AI to create high-quality text, images, audio, code, and other content. This review synthesizes the core technical foundations and best practices for training, evaluation, and governance, with an emphasis on scalability and human oversight. The paper examines applications across customer service, marketing, software development, healthcare, finance, law, logistics, and the creative industries, and assesses the labor implications of generative AI through a sociotechnical lens. It also develops a disruption index that integrates task exposure, adoption rates, time savings, and skill complementarity. The paper concludes with actionable recommendations for policymakers, organizations, and workers, emphasizing reskilling, algorithmic transparency, and inclusive innovation. Taken together, these contributions situate generative AI within broader debates about automation, augmentation, and the future of work.

1. Introduction

Artificial intelligence (AI) has entered a transformative phase in which models not only classify data but also generate novel content across multiple modalities. Generative AI—a subset of AI that produces text, images, audio, and software code—has rapidly advanced due to breakthroughs in deep learning, unsupervised representation learning, and scalable computing infrastructure. Models such as generative adversarial networks (GANs) [1], variational autoencoders (VAEs) [2], diffusion models [3], and transformer-based large language models [4,5] underpin this technological evolution. Reinforcement learning from human feedback (RLHF) further aligns model outputs with human intent [6,7].
These advances have enabled generative models to perform tasks previously thought to be uniquely human, raising new questions for organizations, labor markets, and governance institutions. The potential for knowledge work automation is particularly salient: industry estimates suggest that up to 30% of U.S. work hours could be automated by 2030, with generative AI accelerating the pace of occupational transitions. However, the implications vary significantly by task type, occupation, and institutional context. While the International Labour Organization (ILO) warns that clerical workers face disproportionate exposure, it also finds that most tasks are only partially automatable [8]. Recent studies further suggest that generative AI can both substitute for and complement human labor, depending on how tasks are structured and what skills are emphasized [9,10]. These findings underscore the importance of nuanced measurement, cross-sectoral analysis, and informed policy responses.
Importantly, generative AI is diffusing at a pace and scale rarely observed in prior waves of general-purpose technology. According to the 2025 AI Index, U.S. private investment in AI reached $109.1 billion in 2024—nearly twelve times higher than China’s $9.3 billion—and accounted for over $33.9 billion in generative AI investment alone, an 18.7% year-over-year increase [11]. Business adoption has surged: 78% of organizations reported using AI in 2024, up from 55% in 2023 [11]. These developments signal that AI has transitioned from experimental novelty to strategic infrastructure, with generative models driving much of the momentum.
National statistics mirror this trend. The OECD’s 2025 ICT survey reveals that AI adoption rates nearly doubled in several countries between 2023 and 2024. In Canada, 9.3% of businesses reported using generative AI in early 2024, with an additional 4.6% planning to adopt it [12]. The United Kingdom’s Office for National Statistics projects a rise in AI adoption from 9% in 2023 to 22% in 2024, while the U.S. Census finds that 8.3% of firms used AI for production tasks as of April 2025 [12]. On the individual level, survey data indicate that 39.4% of Americans had used generative AI by August 2024, with 28% using it at work and 10.6% using it daily [13]. These rates far exceed historical adoption curves: generative AI’s penetration within one year is nearly double that of personal computers three years after the launch of the IBM PC [13].
Such adoption statistics suggest that generative AI is no longer a niche innovation but a pervasive platform technology. It is being embedded into enterprise workflows, software ecosystems, and consumer applications. Yet diffusion remains uneven. OECD data reveal higher uptake in knowledge-intensive services and among larger firms, with Nordic countries reporting AI adoption rates of over 44% in the communications sector, compared with under 10% in manufacturing in Southern Europe [12]. Usage also diverges demographically: men, younger workers, and highly educated professionals are significantly more likely to use generative AI than their peers [13]. Without targeted policy and organizational action, such disparities may deepen existing social and economic inequalities.
At the same time, generative AI does not merely displace—it also complements existing systems. Historical analysis of general-purpose technologies, such as electricity and computing, reveals that productivity gains emerge when innovation is combined with institutional reform and human capital development [14,15,16]. Understanding whether generative AI will follow a similar path or diverge requires a multidisciplinary perspective that combines insights from technology adoption, labor economics, sociotechnical systems theory, and digital governance.
This review responds to that need by synthesizing research from computer science, economics, management, and public policy. It situates generative AI within the long tradition of general-purpose technologies (GPTs) [14], while drawing on the diffusion of innovations framework [17], the technology acceptance model [18], unified models of IT adoption, and sociotechnical systems thinking [19] to contextualize its trajectory. The review integrates national statistics, labor force surveys, and firm-level data to examine adoption patterns and socioeconomic consequences.
This paper contributes an integrative review of generative AI that connects its technical foundations, sectoral applications, and sociotechnical implications. It also proposes an exploratory disruption index that synthesizes existing data into a quasi-predictive indicator of occupational vulnerability under alternative adoption scenarios. The index is offered as a conceptual device rather than a definitive forecast, so its outputs should be read as scenario-based comparisons rather than point predictions.
Specifically, the paper delivers (1) a detailed taxonomy of generative model families, emphasizing their mathematical underpinnings and key design trade-offs; (2) a cross-sectoral synthesis of use cases that surfaces emerging best practices alongside recurring pitfalls; (3) a novel disruption index that integrates task exposure, adoption rates, time savings, and skill complementarity to assess occupational vulnerability; and (4) an empirical assessment of projected generative AI impacts through 2030 using U.S. and global data.
Existing review articles on generative AI often stay within a single lane: some concentrate on model architectures, training, and evaluation; others catalog sector-specific applications; and others discuss automation risk at a high level without a unified link between technical capability, adoption dynamics, and task-level labor exposure. This manuscript is distinguished by integrating these streams through a general-purpose technology lens and translating that integration into a transparent organizing framework. Specifically, it connects (i) the technical foundations and constraints that shape what generative systems can reliably do, (ii) observed cross-sector adoption patterns and governance considerations that shape how they are deployed, and (iii) a task- and occupation-level measurement approach that separates exposure from adoption, time savings, and complementarity. The result is an integrative review plus a scenario-based disruption index with empirical projections through 2030, offering a single, coherent account that existing reviews typically do not provide when they focus on only technology, only use cases, or only labor-market narratives.
The remainder of the paper is organized as follows. Section 2 introduces the historical evolution and technical foundations of generative AI. Section 3 surveys use cases across industries. Section 4 analyzes labor disruption and introduces the disruption index. Section 5 presents empirical estimates of disruption to 2030. Section 6 discusses implications and future research directions. Section 7 concludes.

2. Foundations of Generative AI

This section examines the emergence of generative AI from both historical and technical perspectives. After outlining the evolution and taxonomy of generative models, the paper delves into the mathematical foundations of each class, highlighting their objectives and characteristics. These subsections provide the necessary context for understanding later analysis of use cases and labor impacts.

2.1. Evolution and Taxonomy

Generative AI has roots in probabilistic modeling and unsupervised learning. Early generative approaches used graphical models and Markov chains, but deep generative models have revolutionized the field. Table 1 presents a timeline of key milestones. The introduction of GANs in 2014 ignited intense research on adversarial learning [1]. VAEs provided a variational framework for latent variable models [2]. Denoising diffusion models introduced a stochastic process that adds and removes noise to learn data distributions [3]. Transformers replaced recurrent networks with self-attention, enabling scalable language models [4]. RLHF and instruction-tuning aligned large language models with human preferences [6]. These advances, combined with massive datasets and improved hardware, underpin the development of modern generative AI.
The timeline in Table 1 underscores how rapidly generative AI has evolved. In less than a decade, the field progressed from the first adversarial and variational models to diffusion models and instruction-tuned transformers. This progression reflects the cumulative impact of algorithmic breakthroughs, expanding datasets, and increasingly powerful hardware.

2.2. Generative Models

Generative models aim to learn a probability distribution $p_\theta(x)$ over data $x$ and to sample novel instances. The principal classes of models and their mathematical foundations are summarized below.

2.2.1. Generative Adversarial Networks

GANs consist of a generator $G(z; \theta)$ that maps latent noise $z \sim p_z$ to data space and a discriminator $D(x; \phi)$ that distinguishes real from fake samples. The generator and discriminator play a minimax game:
$$\min_{\theta} \max_{\phi} V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))].$$
At equilibrium, the generator produces samples indistinguishable from real data. GANs yield sharp outputs but suffer from training instability and mode collapse. Subsequent variants (e.g., Wasserstein GAN, conditional GAN, and style-based GAN) address these issues.
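To make the value function concrete, the following sketch estimates $V(D,G)$ by Monte Carlo from discriminator outputs on real and generated batches; at the theoretical equilibrium, where $D(x) = 1/2$ everywhere, the value equals $-\log 4$. The function name and inputs are illustrative, not taken from any particular library.

```python
import math

def gan_value(d_real, d_fake):
    """Monte Carlo estimate of the GAN value function
    V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))],
    given discriminator outputs on real and generated samples."""
    term_real = sum(math.log(d) for d in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - d) for d in d_fake) / len(d_fake)
    return term_real + term_fake

# At the theoretical equilibrium D(x) = 1/2 everywhere, V = -log 4.
v_eq = gan_value([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
```

In training, the discriminator ascends this value while the generator descends it; monitoring the empirical estimate against the $-\log 4$ equilibrium is one rough diagnostic of training balance.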

2.2.2. Variational Autoencoders

VAEs are latent variable models trained by maximizing the evidence lower bound (ELBO). Let $p_\theta(x \mid z)$ be a decoder and $q_\phi(z \mid x)$ an encoder approximating the posterior. The ELBO on the log-likelihood is:
$$\log p_\theta(x) \geq \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big),$$
where $p(z)$ is a prior. Optimizing the ELBO learns both generative and inference networks. VAEs generate diverse samples but often produce blurry images due to the Gaussian likelihood assumption.
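Under the common choice of a standard normal prior and a diagonal Gaussian encoder, the KL term of the ELBO has a closed form, $\tfrac{1}{2}\sum_i (\mu_i^2 + \sigma_i^2 - \log \sigma_i^2 - 1)$. A minimal sketch (function name and inputs are illustrative):

```python
import math

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over
    latent dimensions; logvar holds log(sigma^2) per dimension."""
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1.0
                     for m, lv in zip(mu, logvar))

# When the encoder posterior equals the prior, the KL term vanishes.
kl_zero = kl_to_standard_normal([0.0, 0.0], [0.0, 0.0])
```

In practice this analytic KL is added to a reconstruction loss (the negative expected log-likelihood term) to form the per-example training objective.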

2.2.3. Denoising Diffusion Models

Diffusion models define a forward process that gradually adds Gaussian noise to the data and a reverse process that learns to denoise it. Let $q(x_t \mid x_{t-1}) = \mathcal{N}\big(\sqrt{1-\beta_t}\, x_{t-1}, \beta_t I\big)$ for a variance schedule $(\beta_t)_{t=1}^{T}$. The joint distribution of the forward process is $q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1})$ [3]. The model learns a reverse process $p_\theta(x_{t-1} \mid x_t)$ parameterized by a neural network and minimizes a variational bound on the negative log-likelihood. Diffusion models can achieve state-of-the-art image and audio generation and are robust to mode collapse.
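A useful consequence of the Gaussian forward process is the closed form $q(x_t \mid x_0) = \mathcal{N}\big(\sqrt{\bar{\alpha}_t}\, x_0,\, (1-\bar{\alpha}_t) I\big)$ with $\bar{\alpha}_t = \prod_{s \le t} (1-\beta_s)$, which lets one noise a sample to any timestep in a single draw. A minimal sketch, assuming a linear variance schedule (a common choice, not the only one):

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)     # linear variance schedule
alpha_bar = np.cumprod(1.0 - betas)    # \bar{alpha}_t = prod_{s<=t} (1 - beta_s)

def q_sample(x0, t):
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x0, (1 - abar_t) I)
    directly, without simulating the full Markov chain."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = np.ones(4)
x_early = q_sample(x0, 10)     # mostly signal at small t
x_late = q_sample(x0, T - 1)   # nearly pure noise at t = T
```

Because $\bar{\alpha}_t$ decays toward zero, late-step samples are almost indistinguishable from the Gaussian prior, which is exactly what the reverse process learns to invert.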

2.2.4. Transformer-Based Language Models

The transformer architecture introduced a self-attention mechanism defined by
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V,$$
where $Q$, $K$, and $V$ are query, key, and value matrices, and $d_k$ is the key dimensionality [4]. Transformers enable parallel computation and capture long-range dependencies. Pre-training large transformers on massive corpora yields LLMs such as BERT [5] and GPT-3 [21]. These models are further fine-tuned via supervised learning, RLHF [6], or instruction tuning.
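The attention equation translates almost directly into NumPy. The sketch below is an illustrative single-head version without masking, batching, or learned projections:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V                            # weighted mix of values

# One query attending over two keys; it aligns better with the first key,
# so the output leans toward the first value row.
Q = np.array([[1.0, 0.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[10.0, 0.0], [0.0, 10.0]])
out = attention(Q, K, V)
```

Real transformer layers add linear projections of the inputs into $Q$, $K$, $V$, run many such heads in parallel, and concatenate the results, but the core computation is the one shown.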

2.2.5. Reinforcement Learning from Human Feedback (RLHF)

RLHF aligns models with human preferences. A reward model $r_\psi(y \mid x)$ is trained from human-labeled comparisons of model outputs. The language model $\pi_\theta(y \mid x)$ is fine-tuned using reinforcement learning to maximize the expected reward while staying close to the base model $\pi_0$:
$$\max_{\theta}\; \mathbb{E}_{y \sim \pi_\theta(\cdot \mid x)}\big[r_\psi(y \mid x)\big] - \lambda\, \mathrm{KL}\big(\pi_\theta(\cdot \mid x) \,\|\, \pi_0(\cdot \mid x)\big),$$
where $\lambda$ balances reward maximization against divergence from the original model [6,7]. RLHF reduces hallucination and toxicity and improves instruction following, but it requires high-quality human feedback and raises concerns about bias.
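For a small discrete set of candidate outputs, the KL-regularized objective can be evaluated exactly, which makes the reward-versus-divergence trade-off tangible. A minimal sketch (the policy probabilities, rewards, and $\lambda$ value are illustrative placeholders):

```python
import math

def rlhf_objective(pi, pi0, reward, lam=0.1):
    """KL-regularized objective E_pi[r(y)] - lam * KL(pi || pi0)
    over a discrete set of candidate outputs y."""
    exp_reward = sum(p * r for p, r in zip(pi, reward))
    kl = sum(p * math.log(p / q) for p, q in zip(pi, pi0) if p > 0)
    return exp_reward - lam * kl

# If the tuned policy equals the base policy, the KL penalty vanishes
# and the objective is just the expected reward.
obj_base = rlhf_objective([0.5, 0.5], [0.5, 0.5], [1.0, 0.0])
```

Raising $\lambda$ keeps the tuned policy closer to the base model; lowering it lets reward maximization dominate, at the risk of reward hacking.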

2.3. Best and Bad Practices

Training generative models demands careful practice. Huang et al. [22] emphasize that unstable GAN training arises from an imbalance between the generator and the discriminator, and propose techniques such as feature matching, minibatch discrimination, and gradient penalties. VAEs benefit from sophisticated priors and decoders (e.g., normalizing flows). Diffusion models require well-tuned variance schedules and noise-prediction architectures; latent diffusion reduces computation by operating in a learned latent space [20]. Large language models need careful data curation, alignment via RLHF, and safety mechanisms.
Evaluation is equally critical. Microsoft outlines best practices for evaluating generative AI, including defining clear metrics, tailoring evaluation to context, combining quantitative and qualitative measures, and adopting continuous monitoring [23]. NIST’s AI Risk Management Framework and generative AI profile provide governance guidelines and risk mitigations across the lifecycle. Bad practices include overfitting to training data, failing to audit outputs for bias and hallucination, and deploying models without alignment or guardrails. Table 2 summarizes generative model classes, objectives, advantages, and limitations.
As Table 2 shows, each approach involves trade-offs: GANs produce sharp samples but are unstable to train, VAEs offer a tractable likelihood bound at the cost of blurry outputs, diffusion models achieve high fidelity yet incur slow sampling, transformers excel at sequence modeling but require immense resources, and RLHF aligns outputs with human intent but depends on costly feedback.
As shown in Table 3, the evaluation of generative models depends strongly on the modality (text versus images) and on whether the goal is to measure predictive fit, perceptual realism, or similarity to reference outputs. For language modeling, perplexity is a probabilistic measure derived from the model’s average negative log-likelihood over a token sequence. Intuitively, it quantifies how “surprised” the model is by the observed data: when the conditional probabilities $p_\theta(x_t \mid x_{<t})$ are consistently high, the exponentiated average loss decreases, yielding a lower perplexity. This makes perplexity a useful intrinsic metric for the quality of next-token prediction. Still, it is less informative about downstream properties such as factuality or usefulness in open-ended generation, where multiple continuations may be acceptable even if they are not the exact reference continuation.
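Given per-token log-probabilities, perplexity is simply the exponential of the average negative log-likelihood. A minimal sketch:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(average negative log-likelihood per token)."""
    n = len(token_logprobs)
    avg_nll = -sum(token_logprobs) / n
    return math.exp(avg_nll)

# A model that assigns probability 0.25 to every token in an 8-token
# sequence has perplexity 4: it is as "surprised" as a uniform choice
# among four options at each step.
ppl = perplexity([math.log(0.25)] * 8)
```

Lower is better; a perplexity of 1 would mean the model assigned probability 1 to every observed token.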
For image generation, Table 3 highlights two widely used distributional metrics: Fréchet Inception Distance (FID) and Inception Score (IS). FID compares the feature-space distributions of real images and generated images, typically using embeddings from an Inception network. The formula combines a mean difference term $\|\mu_x - \mu_g\|_2^2$ and a covariance alignment term involving $\Sigma_x$ and $\Sigma_g$. Because it evaluates both central tendency and spread in feature space, FID is often interpreted as jointly capturing realism and diversity; lower values indicate that the generated distribution more closely matches the real distribution. Inception Score, by contrast, emphasizes two complementary aspects: (i) individual samples should yield confident class predictions (a low-entropy $p(y \mid x)$), and (ii) the set of generated samples should cover many classes overall (a higher-entropy marginal $p(y)$). The KL divergence inside the expectation formalizes this trade-off, and the outer exponential makes higher scores better. Practically, IS can reward visually classifiable samples and diversity across classes, but it does not directly compare to real data and can be sensitive to the classifier used.
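The FID formula can be illustrated in the simplified case of diagonal covariances, where the matrix square root in $\mathrm{tr}\big(\Sigma_x + \Sigma_g - 2(\Sigma_x \Sigma_g)^{1/2}\big)$ reduces to an elementwise square root. Real FID computations use full covariances of Inception features (e.g., via `scipy.linalg.sqrtm`); the sketch below is a pedagogical simplification:

```python
import numpy as np

def fid_diagonal(mu_x, var_x, mu_g, var_g):
    """Frechet distance between N(mu_x, diag(var_x)) and
    N(mu_g, diag(var_g)): ||mu_x - mu_g||^2 plus the diagonal
    version of tr(Sx + Sg - 2 (Sx Sg)^{1/2})."""
    mean_term = np.sum((mu_x - mu_g) ** 2)
    cov_term = np.sum(var_x + var_g - 2.0 * np.sqrt(var_x * var_g))
    return float(mean_term + cov_term)

fid_same = fid_diagonal(np.zeros(3), np.ones(3),
                        np.zeros(3), np.ones(3))        # identical -> 0
fid_shift = fid_diagonal(np.zeros(3), np.ones(3),
                         np.array([1.0, 0.0, 0.0]), np.ones(3))
```

Identical distributions score zero, and any shift in mean or mismatch in spread increases the distance, which is the behavior the metric relies on.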
Table 3 also includes reference-based metrics used when ground-truth targets exist, such as machine translation, summarization, or constrained text generation. BLEU is an n-gram precision metric, aggregating log-precisions $p_n$ across $n = 1, \dots, N$ with weights $w_n$ and then exponentiating to produce a score where higher values indicate closer overlap with the reference text. Because BLEU emphasizes precision of matched n-grams, it tends to reward outputs that reuse reference phrasing, which is appropriate for translation-like settings but can undervalue legitimate paraphrases in open-ended generation. ROUGE-L, in contrast, relies on the longest common subsequence (LCS) between candidate and reference sequences and computes an F-measure variant that balances recall and precision via $\beta$. This design makes ROUGE-L especially common for summarization, where preserving core content (often aligned with recall) is important, while still penalizing overly verbose or irrelevant text. Taken together, the metrics in Table 3 illustrate that no single score fully characterizes “generation quality”; each metric operationalizes quality differently, so appropriate selection and multi-metric reporting are typically necessary.
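To illustrate the clipped n-gram precision idea behind BLEU, the sketch below computes a simplified score as the geometric mean of clipped unigram and bigram precisions. It deliberately omits the brevity penalty and the smoothing used by production BLEU implementations, so it is an illustration of the mechanism rather than a drop-in metric:

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=2):
    """Simplified BLEU: geometric mean of clipped n-gram precisions
    against a single reference (brevity penalty omitted)."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n])
                       for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n])
                      for i in range(len(reference) - n + 1))
        # Clip each candidate n-gram count by its count in the reference.
        clipped = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        log_precisions.append(math.log(max(clipped, 1e-12) / total))
    return math.exp(sum(log_precisions) / max_n)

score_perfect = bleu("the cat sat".split(), "the cat sat".split())
score_disjoint = bleu("a b".split(), "x y".split())
```

An exact match scores 1.0, fully disjoint text scores near zero, and partial overlap lands in between, mirroring the precision-weighted behavior described above.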
While Table 3 focuses on performance assessment, Table 4 frames deployment in terms of governance: what can go wrong, why it matters, and what to do about it. The first category, fairness, reflects the risk that model outputs replicate or amplify historical biases embedded in training data. The mitigations listed in Table 4 correspond to standard intervention points: measurement (bias audits), data (diversifying and balancing datasets), and objective design (fairness regularization or constraints). Importantly, these actions are complementary: audits detect issues, data curation reduces biased signals, and regularization can directly shape learned behavior when bias cannot be fully removed from the data [24].
The privacy category in Table 4 addresses the possibility that generative models may inadvertently reproduce sensitive information seen during training. Mitigations emphasize limiting exposure at the source (data minimization and restricting protected data), controlling access (access controls), and reducing memorization risk through differential privacy techniques. In practice, privacy protection is strongest when multiple layers are used together, because operational controls (who can train, who can query, what data are included) and algorithmic controls (privacy-preserving learning) address different parts of the risk surface.
Robustness in Table 4 captures two related failure modes: unreliable content (including hallucinations) and vulnerability to adversarial prompting. The recommended actions align with a defense-in-depth approach: adversarial training to improve model behavior under attack-like inputs, guardrails and content filters to constrain outputs at inference time, and uncertainty estimation with fallbacks to reduce harm when the model is not confident. Conceptually, these measures recognize that robustness is not just a training-time property; runtime controls and system-level design also shape it.
Transparency, as presented in Table 4, is an adoption and accountability issue: opaque model behavior and limited disclosure about data and limitations can erode trust. The mitigations focus on documentation artifacts (model cards and transparency reports), explanation mechanisms, and user feedback channels. These practices do not necessarily improve raw benchmark performance, but they improve interpretability, auditability, and stakeholders’ ability to make informed use decisions.
Finally, the security category in Table 4 emphasizes intentional misuse, such as automated scams, fraud, or misinformation. Recommended mitigations combine monitoring and enforcement (abuse detection, usage policies, verification procedures) with alignment to external requirements (regulatory standards) and escalation pathways (cooperation with law enforcement when appropriate). This category underscores that the risks of generative AI include not only model errors but also the potential for capable systems to be weaponized when access and incentives are misaligned.
Read together, Table 3 and Table 4 connect two sides of responsible generative AI practice: measuring how well a model generates according to task-appropriate criteria, and ensuring that the resulting capability is deployed with controls that address bias, privacy, reliability, transparency, and misuse.

2.4. Additional Generative Model Families

Although GANs, VAEs, diffusion models, and transformers dominate current practice, a rich ecosystem of alternative generative frameworks continues to expand the design space. Normalizing flows (NFs) transform a simple base distribution (e.g., a multivariate Gaussian) into a complex target distribution via a sequence of invertible transformations $f_1, \dots, f_L$ with tractable Jacobians. Let $z \sim p_0(z)$ denote a base random variable and $x = f(z)$ with $f = f_L \circ \cdots \circ f_1$; then the change-of-variables formula gives the exact likelihood:
$$p_\theta(x) = p_0\big(f^{-1}(x)\big) \prod_{\ell=1}^{L} \left| \det \frac{\partial f_\ell^{-1}}{\partial h_{\ell-1}} \right|,$$
where $h_0 = x$ and $h_L = z$ denote the intermediate representations of the inverse pass [25]. Normalizing flows enable exact computation of the log-likelihood and are amenable to both density estimation and sampling. Coupling layers, autoregressive flows, and continuous-time flows (also known as neural ordinary differential equations, or neural ODEs) provide flexible building blocks. Recent work integrates flows with diffusion models, yielding “flow matching” techniques that combine fast sampling with high-quality samples.
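For a single affine transformation $x = a z + b$ with a standard normal base, the change-of-variables formula is easy to verify by hand: $\log p_\theta(x) = \log \mathcal{N}\big((x-b)/a;\, 0, 1\big) - \log|a|$, which is exactly the density of $\mathcal{N}(b, a^2)$. A minimal sketch (the parameter values are illustrative):

```python
import math

def affine_flow_logpdf(x, a=2.0, b=1.0):
    """Exact log-density under the one-layer flow x = a*z + b with
    z ~ N(0, 1): log p(x) = log N(f^{-1}(x); 0, 1) - log |df/dz|."""
    z = (x - b) / a                                  # invert the flow
    log_p0 = -0.5 * (z * z + math.log(2.0 * math.pi))  # standard normal
    return log_p0 - math.log(abs(a))                 # Jacobian correction

# At x = b the latent is z = 0, so the log-density is the standard
# normal mode minus log |a|.
logp_mode = affine_flow_logpdf(1.0)
```

Deep flows stack many such invertible layers, summing the per-layer log-Jacobian terms, which is what makes exact maximum-likelihood training possible.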
Energy-based models (EBMs) define an unnormalized density through an energy function $E_\theta(x)$ and sample via Langevin dynamics or contrastive divergence. EBMs can model multimodal distributions and have been applied to text and image generation, but training is computationally intensive because the normalizing constant is intractable. Hybrid models combine EBMs with score matching and diffusion processes to sidestep this normalization problem [26].
Generative flow networks (GFlowNets) aim to sample complex discrete structures (e.g., molecules, programs) with probabilities proportional to a given reward function. They train a policy $P(s' \mid s)$ that flows probability mass from a starting state to terminal states while ensuring that the net flow into each nonterminal state equals the net flow out. The resulting distribution approximates a Boltzmann distribution over structures and can be used for discovery tasks. GFlowNets complement diffusion and autoregressive models by handling combinatorial outputs.

2.5. Diffusion and Adoption of Generative AI

The proliferation of generative AI is shaped not only by technical progress but also by socioeconomic diffusion dynamics. The classic diffusion of innovation theory posits that adoption follows an S-shaped curve, with early adopters, the early majority, the late majority, and laggards [17]. In the organizational context, adoption is mediated by complementary assets, skills, and culture [19]. Generative AI reduces integration costs by offering off-the-shelf models and user-friendly interfaces (e.g., chatbots) [27], enabling faster diffusion than previous AI waves.
Mathematically, adoption in occupation $j$ at time $t$ can be modeled by a logistic function
$$A_{j,t} = \frac{K_j}{1 + \exp\!\big(-\kappa_j (t - t_{0,j})\big)},$$
where $K_j$ denotes the long-run saturation level, $\kappa_j$ is the adoption rate, and $t_{0,j}$ is the inflection point. The derivative $\dot{A}_{j,t} = \kappa_j A_{j,t}\big(1 - A_{j,t}/K_j\big)$ captures adoption speed, highlighting that diffusion is slow when $A_{j,t}$ is near zero or near saturation. Empirical work estimates $\kappa_j$ from survey data. For example, generative AI use among U.S. workers grew from near zero in late 2022 to 28% by August 2024 [13]; projecting forward using Equation (6) suggests that adoption could reach 60–70% by 2030 under optimistic scenarios. Adoption heterogeneity across firms and sectors can be captured by varying $K_j$ and $\kappa_j$.
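The logistic curve is straightforward to evaluate and project. In the sketch below, the parameter values (saturation $K = 0.70$, rate $\kappa = 0.9$, inflection at 2026) are illustrative placeholders chosen for the example, not estimates from the cited surveys:

```python
import math

def adoption(t, K=0.70, kappa=0.9, t0=2026.0):
    """Logistic adoption curve A_t = K / (1 + exp(-kappa * (t - t0)))."""
    return K / (1.0 + math.exp(-kappa * (t - t0)))

# Adoption equals exactly half the saturation level at the inflection
# point, and approaches K as t grows.
a_inflection = adoption(2026.0)   # = K / 2 = 0.35
a_2030 = adoption(2030.0)
```

Fitting $K$, $\kappa$, and $t_0$ to observed adoption shares (e.g., by nonlinear least squares) yields occupation-specific curves whose inflection timing and ceiling can then be compared across sectors.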
Table 5 summarizes adoption rates across U.S. demographic groups using data from the Real-Time Population Survey conducted in August 2024 [13]. “Adoption at work” refers to the percentage of employed respondents who report using generative AI in their jobs, while “daily users” denotes those who used generative AI every workday in the prior week. Younger workers and individuals with higher levels of education exhibit higher usage rates, reflecting both access to technology and occupational differences.
The survey results in Table 5 reveal pronounced demographic gradients in generative AI adoption. Men, younger workers, degree holders, and those in computer-related occupations report much higher workplace and daily use of generative tools than women, older individuals, those without college degrees, and those in blue-collar occupations. These disparities foreshadow unequal benefits from diffusion and inform targeted policy responses.
Table 6 provides a cross-country comparison of business AI adoption, based on official statistics and the 2025 update of the OECD ICT Access and Usage by Businesses database [12]. Values reflect the share of enterprises using any AI technology, with a focus on generative AI adoption where available. For example, 9.3% of Canadian firms utilized generative AI in Q1 2024, with another 4.6% planning to adopt it. In Estonia, generative AI triggered a 2.7-fold increase in adoption rates. These descriptive statistics provide a foundation for the empirical modeling in Section 5.
As shown in Table 6, AI adoption is climbing across all surveyed economies, yet generative AI uptake remains uneven. Canada and Estonia already report double-digit usage rates, whereas adoption in the United States and the U.K. is poised to accelerate. These cross-country differences highlight how national industrial structures and policies mediate the diffusion of innovation.
Underlying these adoption gaps are structural factors such as digital infrastructure, education and training systems, and sectoral composition. Countries with widespread broadband access, high levels of tertiary education, and strong information and communication technology sectors—typical of the Nordic economies—tend to adopt generative AI more rapidly. By contrast, economies with limited digital infrastructure or lower investment in human capital exhibit slower adoption despite similar levels of technology availability. These observations suggest that global inequality in generative AI adoption is not purely technological but reflects broader socio-economic capabilities; policies aimed at expanding digital access and skills development could narrow these divides.
These descriptive tables underscore both the rapid diffusion of generative AI and the persistent divides across demographic lines and national contexts. Such heterogeneity motivates the occupation-specific modeling of adoption and disruption in the subsequent sections.

3. Use Cases Across Sectors

Generative AI is permeating many industries and rapidly changing how products and services are delivered. Its influence spans from customer support to creative design, legal research, software development, and beyond. The deployment of text-, image-, audio-, and code-generation tools has reduced the marginal cost of producing complex content, enabling the development of new business models. At the same time, sectoral adoption varies depending on regulatory constraints, data availability, and skill requirements. Table 7 presents representative applications, benefits, and risks.
Empirical evidence from information systems research illustrates both the promise and challenges of generative AI in practice. For example, a quasi-experimental study of a fast-moving consumer goods manufacturer demonstrated that intelligent image processing–powered shelf monitoring substantially increased product sales and improved retailer compliance [28]. In the healthcare domain, AI-augmented diagnostic systems not only improved accuracy but also induced reflective practice among physicians, prompting deeper engagement with clinical information [29]. These and similar findings from MISQ articles highlight the diverse and domain-specific impacts of generative AI, reinforcing the need to consider context when evaluating use cases.
Table 7 catalogs representative applications across key industries and summarizes the benefits and risks of each. It demonstrates that generative AI provides tangible productivity gains—from automated customer support to accelerated drug discovery—while simultaneously introducing risks such as hallucinations, bias, liability, and deskilling. These trade-offs reinforce the need for careful implementation.
Beyond these sectors, generative AI is increasingly used in public administration, journalism, and scientific publishing. However, adoption varies by country and organization. A 2025 U.S. survey found that 28% of workers used generative AI, with 9% using it daily. Users saved an average of 5.4% of their work hours, totaling 2.2 h per week. Upwork’s platform data suggests that generative AI increased job postings by 2.4% but shifted demand from low-value writing tasks toward high-value data science and analytics.

Extended Sectoral Use Cases

The general-purpose nature of generative AI means that its applications extend beyond the archetypal sectors enumerated above. This subsection provides a deeper examination of sectors where generative AI is already shaping practice or holds substantial promise. Both economic value and adoption patterns are emphasized, drawing on recent surveys and case studies.
  • Manufacturing and Industrial Design
Generative algorithms can search vast design spaces to produce novel components that meet functional constraints while minimizing weight and material usage. In the aerospace and automotive industries, companies employ generative design to create lattice structures and topologically optimized parts; these designs reduce weight by 20–40% and lower fuel consumption, yet they would be intractable to discover manually [30]. Generative AI also synthesizes process plans and optimizes supply chains, predicting equipment failure and scheduling maintenance. Surveys of U.S. enterprise leaders indicate that code copilots and automation tools are the most widely adopted generative AI applications, with roughly 51% of enterprises reporting the use of code assistants and 31% employing support chatbots [31]. Investment in generative AI jumped from $2.3 billion in 2023 to $13.8 billion in 2024, with 72% of decision makers anticipating broader adoption [31]. These numbers indicate that manufacturing firms view AI as a key lever for improving productivity.
  • Agriculture and Food Systems
Generative models are increasingly applied to agriculture. Vision models generate synthetic crop images and simulate plant growth under different climate scenarios to augment limited field data. Language models summarize soil and weather conditions for farmers and produce actionable recommendations on irrigation, fertilization, and pest management. Protein and genome synthesis models are designed to create disease-resistant crops and alternative proteins. Although adoption remains nascent, the potential is considerable: crop yield forecasting, supply chain optimization, and food safety monitoring could collectively add hundreds of billions of dollars in value [32]. Realizing this potential requires open data and farmer training, highlighting the sociotechnical challenges of rural diffusion.
  • Energy and Climate
Beyond immediate environmental costs, generative AI may become part of the solution to climate challenges. Models that learn physical dynamics can generate high-resolution weather forecasts, optimize power grid dispatch, and identify anomalies in sensor networks. In the energy sector, generative design helps develop materials for batteries, solar cells, and catalysts, accelerating the search for sustainable energy technologies [26]. Companies deploy language models as virtual energy advisors, helping consumers choose renewable energy plans and reduce consumption. The International Energy Agency warns, however, that AI-driven electrification will place additional strain on grids unless accompanied by substantial investment in renewables and demand response. Policies must therefore coordinate AI development with climate goals [33,34].
  • Government and Public Services
Public agencies worldwide have begun experimenting with generative AI to streamline service delivery. Chatbots answer citizen inquiries, translate documents, and draft policy summaries in plain language. Lawmakers utilize language models to analyze public comments on proposed regulations, identify key themes, and draft legislative proposals. Courts and regulatory bodies utilize AI to summarize case law and facilitate legal research [31]. While these applications promise efficiency and accessibility, they raise ethical and legal questions about transparency, accountability, and due process [35,36]. Adoption must be paired with robust audit trails, human oversight, and clear boundaries around automated decision making [37].
  • Science and Research
In the life sciences, generative models design molecules and proteins with desired properties, vastly accelerating drug discovery and materials science [38]. GFlowNets and diffusion models sample chemical space to find candidate compounds for clinical trials. Researchers use language models to draft literature reviews, formulate hypotheses, and generate code for data analysis. In astrophysics and climate science, generative models synthesize simulation data, reducing the cost of high-fidelity computations. Scientific applications highlight the role of generative AI as an intellectual partner rather than a replacement for human creativity. Success depends on cross-disciplinary collaboration and an understanding of model limitations [39,40]. As generative AI enters peer review and grant evaluation, concerns about bias, reproducibility, and credit attribution will intensify [26,41].
  • Media and Entertainment
The creative industries were among the first to feel the disruptive effects of generative AI. Artists and designers utilize diffusion models to explore styles, iterate on concepts, and create storyboards [42]. Musicians co-create songs with language models that create lyrics and melodies. Newsrooms use AI to draft headlines, summarize interviews, and write financial reports, while film studios use it to generate visual effects and create voiceovers. These tools democratize content creation, but they also raise questions about copyright and authenticity [43,44]. A responsible path forward must reconcile artistic freedom with fair compensation and intellectual-property protection.
Table 8 summarizes other relevant generative AI use cases across multiple sectors, alongside their estimated economic potential, adoption trends, and critical risks. According to the 2024 Menlo Ventures enterprise survey, over half of manufacturing enterprises have adopted code copilots, and nearly one-third use generative AI for support chatbots [31]. Investment in manufacturing-related generative AI rose from $2.3 billion in 2023 to $13.8 billion in 2024, with 72% of leaders anticipating broader deployment. In contrast, adoption in agriculture and energy remains nascent but carries long-run economic value in the hundreds of billions [26,32,34]. Barriers, including limited data infrastructure, domain-specific risks, and regulatory uncertainty, shape the pace and direction of generative AI diffusion. These sectoral distinctions provide a foundation for subsequent modeling of occupational exposure and labor-market transformation.
The expanded use cases in Table 8 underscore the breadth of generative AI’s economic potential and the heterogeneity of adoption across sectors. Manufacturing leads in deployment, whereas agriculture, energy, government, science, and media remain at earlier stages. Each sector’s risks—from intellectual-property ambiguity to reproducibility concerns—illustrate the diverse governance challenges ahead. Short-term adoption dynamics often involve pilots and narrow implementations within high-value use cases, whereas long-term systemic impacts depend on organizational learning, infrastructure investment, and regulatory stability. Distinguishing between these horizons is essential for realistic planning and policymaking.

4. Labor Disruption and Exposure

Generative AI influences labor through automation and augmentation. Automation substitutes technology for human tasks, while augmentation complements human skills. Empirical findings on exposure and displacement are synthesized below.

4.1. Occupational Exposure

Exposure measures quantify the share of tasks in an occupation that can be automated by AI. Eloundou et al. [45] estimate that roughly 80% of the U.S. workforce has at least 10% of its tasks exposed to large language models, and that 19% of workers have 50% or more of their tasks exposed. When LLM-powered software is included, 47–56% of tasks could be automated. The ILO finds that clerical support workers have 24% of tasks with high exposure and 58% with medium exposure; other occupations have only 1–4% of tasks with high exposure. Exposure varies by income: 0.4% of employment in low-income countries is highly exposed, compared with 5.5% in high-income countries. The Creative AI Jobs Report predicts that AI could create 97 million jobs by 2025 while displacing 85 million.
Sector-specific evidence reveals heterogeneity. In logistics, managers have more than 90% of tasks susceptible to automation, whereas bus and truck mechanics have essentially zero exposure. In the legal sector, 67% of professionals expect generative AI to have a high impact on their work, yet lawyers are still needed to review AI-generated content. The U.S. Bureau of Labor Statistics (BLS) projects robust growth for software developers (17.9%) and database administrators (10.8%) despite AI, while claims adjusters face declines (−4.4%). A Harvard working paper finds that generative AI decreases job postings by 17% for occupations in the top quartile of automation potential, but increases postings by 22% for occupations that are prone to augmentation.

4.2. Complementarity and Inequality

Generative AI often complements high-skill tasks. On Upwork, the adoption of generative AI increased earnings for data science and analytics by over 8%, whereas writing and translation saw declines of 8–10%. The St. Louis Fed [10] reports that the use of generative AI is associated with greater time savings in computer and mathematical occupations compared to service jobs. Studies note gender and socioeconomic disparities: women are over-represented in clerical work and thus face higher exposure; high-income countries have more jobs susceptible to automation; and logistics managers face greater risk than mechanics. A White House report finds that AI-vulnerable occupations with low performance requirements are growing slowly, whereas AI-exposed high-performance occupations grow faster.

4.3. Disruption Index

To systematically measure generative-AI disruption, this paper proposes a disruption index DI_{j,t} for occupation j at time t:
DI_{j,t} = α·E_j + β·A_{j,t} + γ·S_{j,t} − δ·C_j.
Here, E_j is the task exposure percentage (share of tasks automatable), A_{j,t} is the adoption rate (proportion of firms or workers using generative AI in that occupation at time t), S_{j,t} is the average time savings due to generative AI (percentage of work hours saved), and C_j is a complementarity factor capturing the extent to which skills are augmented rather than substituted. The parameters (α, β, γ, δ) reflect the relative weight of each component and can be normalized so that α + β + γ + δ = 1. A higher DI indicates greater disruption risk; the negative contribution of C_j reduces the index when AI complements human labor.
The exposure and adoption components can be derived from surveys and labor statistics. For example, in 2024, generative AI adoption among U.S. workers was 28%, with 9% using it daily, resulting in an average time savings of 5.4%. Complementarity can be proxied by skill demand growth; occupations with rising job postings despite high exposure (e.g., software developers) have higher C_j. This measure is illustrated empirically in the next section.
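As a concrete illustration, the index can be computed directly from its four components. The sketch below uses the baseline weights (0.4, 0.3, 0.2, 0.1) discussed in this paper and hypothetical occupation profiles; the numerical inputs are illustrative placeholders, not estimates from the tables.

```python
# Illustrative computation of the disruption index DI_{j,t}.
# Weights follow the paper's baseline (alpha, beta, gamma, delta);
# occupation profiles are hypothetical values in [0, 1].

WEIGHTS = {"alpha": 0.4, "beta": 0.3, "gamma": 0.2, "delta": 0.1}  # sums to 1

def disruption_index(exposure, adoption, time_savings, complementarity,
                     w=WEIGHTS):
    """DI = alpha*E + beta*A + gamma*S - delta*C (higher = more disruption)."""
    return (w["alpha"] * exposure + w["beta"] * adoption
            + w["gamma"] * time_savings - w["delta"] * complementarity)

# Hypothetical occupation profiles: (E_j, A_{j,t}, S_{j,t}, C_j)
occupations = {
    "clerical support":   (0.58, 0.28, 0.054, 0.1),
    "software developer": (0.30, 0.35, 0.10,  0.8),
    "truck mechanic":     (0.02, 0.05, 0.01,  0.2),
}

# Rank occupations from most to least disrupted under these assumptions.
ranked = sorted(occupations.items(),
                key=lambda kv: disruption_index(*kv[1]), reverse=True)
for name, profile in ranked:
    print(f"{name:20s} DI = {disruption_index(*profile):.3f}")
```

Note how strong complementarity pulls the software developer's index down despite high adoption, matching the intuition behind the negative C_j term.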
Adoption rates often follow an S-shaped diffusion curve. As shown earlier, a simple logistic function can model the proportion of workers adopting generative AI in occupation j at time t:
A_{j,t} = K_j / (1 + exp(−κ_j (t − t_{0,j}))),
where K_j denotes the saturation level, κ_j the adoption rate, and t_{0,j} the inflection point. This function captures slow initial uptake, rapid growth as benefits become evident, and eventual saturation as adoption matures.
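The logistic curve above is straightforward to evaluate; the parameter values below (saturation 0.8, rate 0.9 per year, inflection at 2027) are hypothetical choices for illustration only.

```python
import math

def adoption(t, K, kappa, t0):
    """Logistic diffusion: A_{j,t} = K / (1 + exp(-kappa * (t - t0)))."""
    return K / (1.0 + math.exp(-kappa * (t - t0)))

# Hypothetical parameters: saturation K=0.8, rate kappa=0.9/yr, inflection 2027.
# Adoption equals exactly K/2 at the inflection point and approaches K later.
for year in (2024, 2027, 2030, 2035):
    print(year, round(adoption(year, K=0.8, kappa=0.9, t0=2027), 3))
```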
The choice of weights (α, β, γ, δ) in Equation (7) draws on prior work on automation indices in labor economics. Studies by Corvello [46] and Autor [15] highlight task exposure and adoption as primary drivers of displacement risk, whereas time savings and complementarity moderate that risk. Consistent with this literature, exposure and adoption are assigned higher weights (e.g., α = 0.4 and β = 0.3), time savings receives a moderate weight (γ = 0.2), and complementarity a smaller weight (δ = 0.1). Sensitivity analysis demonstrates that the relative ranking of occupations by disruption risk remains stable under alternative weight vectors.
Operationalizing the complementarity factor C_j requires proxies because direct measures of human–AI synergy are scarce. Occupations that experience employment and wage growth despite high exposure (e.g., software developers and data scientists) are assigned higher C_j values. In contrast, those with declining demand (e.g., clerical support staff) receive lower scores. Although approximate, this proxy captures whether generative AI primarily substitutes for or complements human labor [15,47]. Future work should refine these measures using detailed task-level data.
The parameters of the logistic adoption curve (K_j, κ_j, t_{0,j}) are calibrated using observed adoption rates from industry surveys and national statistics. For each occupation j, K_j is set to its long-run adoption potential (e.g., 0.8 for software developers and 0.3 for clerical workers), while κ_j and t_{0,j} are estimated by fitting the logistic function to adoption data from 2023–2025. This calibration anchors adoption trajectories in empirical diffusion patterns and enables sensitivity checks with alternative parameter values.
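One simple way to perform this calibration is a least-squares fit of κ_j and t_{0,j} with K_j held fixed at its assumed ceiling. The sketch below uses a coarse grid search and invented 2023–2025 adoption observations; the data points and the ceiling K = 0.8 are hypothetical.

```python
import math

def adoption(t, K, kappa, t0):
    """Logistic diffusion curve evaluated at time t."""
    return K / (1.0 + math.exp(-kappa * (t - t0)))

# Hypothetical observed adoption shares for one occupation, 2023-2025.
observed = {2023: 0.10, 2024: 0.18, 2025: 0.28}
K = 0.8  # assumed long-run adoption ceiling

def fit_logistic(observed, K):
    """Grid-search least squares over (kappa, t0) with K held fixed."""
    best = (float("inf"), None, None)
    for kappa in [k / 100 for k in range(10, 201, 5)]:        # 0.10 .. 2.00
        for t0 in [2025 + d / 10 for d in range(0, 101, 5)]:  # 2025 .. 2035
            sse = sum((adoption(t, K, kappa, t0) - a) ** 2
                      for t, a in observed.items())
            if sse < best[0]:
                best = (sse, kappa, t0)
    return best

sse, kappa_hat, t0_hat = fit_logistic(observed, K)
print(f"kappa = {kappa_hat:.2f}, t0 = {t0_hat:.1f}, SSE = {sse:.5f}")
```

In practice one would use a proper optimizer and fit all occupations jointly; the grid search merely makes the calibration logic explicit.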

4.4. Skill-Biased Technical Change and Occupational Dynamics

The disruption index captures exposure, adoption, time savings, and complementarity at a given point in time; however, understanding the deeper forces that shape labor outcomes requires engaging with theories of technological change. The literature on skill-biased technical change (SBTC) argues that new technologies disproportionately increase the productivity of skilled workers relative to unskilled workers, thereby raising wage inequality [15,48]. Early waves of computerization automated routine manual tasks, creating a “hollowing out” of middle-skill employment [16,46]. Generative AI extends automation into the cognitive domain, affecting tasks involving language, analysis, and design. At the same time, it acts as a complement to advanced cognitive skills: software developers using code assistants become more productive, data scientists can focus on higher-order modeling, and managers can allocate more time to strategic decision making [47,49,50].
An occupation’s vulnerability is modeled not merely as exposure to automation but as a function of task characteristics and the potential for upskilling. Let RTI_j denote the routine task intensity of occupation j, that is, the share of tasks that are routine and thus more easily codified. Following Autor [15], RTI_j can be computed as RTI_j = Σ_{k∈routine} w_{j,k}, where w_{j,k} are task weights. Define a vulnerability index
VI_j = E_j · (1 − C_j) / (1 + σ_j),
where E_j is exposure as in Equation (7), C_j is complementarity, and σ_j captures the availability of skill-upgrading pathways (e.g., training programs, transferable skills) for workers in occupation j. High routine intensity, high exposure, and low complementarity produce a large VI_j, signaling a high risk of displacement. Conversely, occupations with substantial upskilling opportunities and complementary tasks can absorb technological shocks more effectively. Researchers estimate that the share of work activities that current technologies can automate has risen from 50% to 60–70%, owing to generative AI’s improved natural language understanding [51]. However, SBTC predicts that demand for complex problem-solving and social-emotional skills will persist [52]. Policymakers and educators must therefore foster lifelong learning and emphasize skills that machines are less likely to replicate.
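The vulnerability index can be sketched in a few lines. The two occupation profiles below are hypothetical, chosen only to show how complementarity and upskilling pathways offset high exposure.

```python
def vulnerability_index(exposure, complementarity, sigma):
    """VI_j = E_j * (1 - C_j) / (1 + sigma_j); higher = more vulnerable."""
    return exposure * (1.0 - complementarity) / (1.0 + sigma)

# Hypothetical profiles: (E_j, C_j, sigma_j)
clerk = vulnerability_index(0.58, 0.1, 0.2)  # high exposure, few pathways
dev   = vulnerability_index(0.30, 0.8, 0.9)  # complementary, many pathways
print(round(clerk, 3), round(dev, 3))
```

Even a moderate σ_j in the denominator noticeably dampens vulnerability, which is the formal counterpart of the claim that upskilling opportunities help occupations absorb technological shocks.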

4.5. Global Inequality and Demographic Heterogeneity

Automation does not affect workers uniformly across countries, income groups, or demographic categories. The International Labor Organization’s 2025 update finds that only 0.4% of employment in low-income countries is highly exposed to generative AI, compared with 5.5% in high-income countries; clerical support workers face 24% of tasks in the high exposure category, while most occupations have fewer than 4% of tasks highly exposed [53]. Across the OECD, roughly one-quarter of workers are exposed to generative AI to some degree, but only about 1% are highly exposed; this share could rise to 70% as generative AI diffuses [12]. These cross-country disparities reflect differences in industrial composition, digital infrastructure, and educational attainment [54]. High-income economies house knowledge-intensive sectors and thus face greater exposure, but also greater opportunities for growth and development. Low-income countries may be shielded from immediate automation due to the prevalence of manual labor; however, they risk being left behind if they lack the capabilities to adopt AI effectively.
Demographic heterogeneity further modulates impact. Real-time survey data show that men are approximately nine percentage points more likely than women to use generative AI at work, that adoption declines sharply with age, and that workers with a college degree are twice as likely to adopt it as those without [13]. Occupations dominated by women, such as clerical and administrative support, have higher exposure levels, whereas male-dominated technical occupations exhibit higher adoption and greater time savings [13]. These disparities compound existing gender and racial inequalities [37,43]. To quantify heterogeneity, consider group-specific adoption functions A_{j,g,t} with saturation levels K_{j,g} and rates κ_{j,g}. Heterogeneity in (K_{j,g}, κ_{j,g}) yields different adoption trajectories and thus different disruption timelines. Equity-oriented policy measures (such as targeted training for underrepresented groups, affordable access to AI infrastructure, and inclusive governance) can help mitigate these gaps.
The disruption index and vulnerability index provide quantitative lenses, but they must be interpreted in the context of these socioeconomic backdrops. The following section utilizes available data to estimate disruptions across occupations and time, illustrating how exposure, adoption, time savings, and complementarity interact with demographic factors to shape the future of work.

5. Empirical Evaluation of Disruption to 2030

Quantifying labor disruption requires combining survey data, occupational task analyses, and adoption forecasts. In this section, the disruption index for selected occupations is estimated using available exposure, adoption, time-savings, and complementarity measures and projected to 2030. The approach synthesizes evidence from the literature cited throughout the paper and calibrates the index weights to reflect the relative importance of automation exposure versus augmentation. Weights are set to (α, β, γ, δ) = (0.4, 0.3, 0.2, 0.1) to emphasize exposure and adoption, although sensitivity analysis yields similar rankings. Table 9 and Table 10 report occupation-level exposure, adoption, and time-savings estimates for 2025 and 2030. The 2025 metrics are based on the OpenAI exposure framework [45], the ILO’s task-level occupational mappings [8], survey data from the St. Louis Fed, and productivity experiments from Upwork. Occupations differ widely in their exposure to generative AI and in adoption rates to date. For instance, software developers report a 35% adoption rate but show low overall exposure, while paralegals, clerical support staff, and logistics managers combine high exposure with moderate adoption rates.
The 2030 projections assume greater diffusion driven by cost savings, productivity incentives, and regulatory adaptation. Disruption Index (DI) values are calculated using Equation (7), which incorporates adjustments for exposure, adoption, time savings, and complementarity. Higher complementarity reduces the disruption score, as human-AI collaboration becomes more feasible. The index suggests that occupations with both high exposure and scalable task automation, such as logistics managers and clerical workers, may face the most significant disruption by 2030.
Table 9 summarizes the estimated exposure, adoption, and time savings by occupation in 2025. Software developers and data scientists exhibit modest exposure and high adoption, while clerical support workers, claims adjusters, and logistics managers combine high exposure with low complementarity. Bus and truck mechanics remain largely unaffected due to the manual nature of their tasks. These baseline metrics inform the subsequent projections.
Looking ahead to 2030, Table 10 projects exposure, adoption, and the resulting disruption index by occupation. High scores for logistics managers, clerical workers, and claims adjusters indicate significant displacement risk. In contrast, software developers and data scientists maintain lower indices due to strong complementarity and higher adoption ceilings. These projections emphasize that disruption is uneven across occupations and contingent on adoption dynamics.
The index suggests high disruption risk for logistics managers, clerical workers, and claims adjusters, reflecting high exposure and increasing adoption. Software developers face a moderate risk because exposure is lower and complementarity is strong; nonetheless, adoption is increasing rapidly. Paralegals and support staff may experience significant displacement as generative AI automates document drafting and discovery. Bus and truck mechanics remain largely insulated because their tasks are manual. These estimates align with BLS projections of declining employment for claims adjusters and moderate growth for personal financial advisors, despite the rise of robo-advisors.
To validate the index, DI rankings are compared with employment trends reported by the BLS and Upwork. Occupations with high DI, such as clerical support and claims adjusters, show slower job growth and wage pressure. Occupations with lower DI, like software development and data science, experience strong demand and wage premiums despite automation [13]. These findings indicate that the disruption index captures both risk and opportunity.

6. Discussion and Implications

This section interprets the empirical evidence presented above in light of broader institutional, technological, and ethical considerations. While the preceding analysis quantified occupational exposure and disruption under various adoption scenarios, the implications of generative AI extend beyond labor economics. They encompass questions of public policy, organizational transformation, environmental externalities, social fairness, and research governance. What follows is a discussion of each domain, with a particular focus on implications for scholars, firms, regulators, and infrastructure providers.

6.1. Policy and Regulation

Public policy must strike a balance between innovation and social protection. Regulatory frameworks such as the NIST AI Risk Management Framework and its generative AI profile provide conceptual tools for risk mapping, measurement, and mitigation. The disruption index developed earlier helps identify the occupations and sectors where intervention is most urgent. Governments should prioritize the following:
  • Transparency and auditability, particularly in high-risk sectors such as healthcare, law, and finance.
  • Human oversight mandates, ensuring accountability for model-assisted decision-making.
  • Reskilling and transition support, as projected occupational shifts (e.g., 12 million transitions in the U.S. by 2030) will require large-scale investment in education, AI literacy, and creativity training [55].
  • Portable benefits and wage insurance, which can mitigate income volatility and support mobility during labor reallocation.
These policies should be complemented by regulatory experimentation (e.g., sandboxes, adaptive rulemaking) to support innovation in governance itself.

6.2. Organizational Strategies

Within firms, adopting generative AI requires more than procurement or technical integration. It requires redesigning workflows, adjusting incentives, and supporting human–machine collaboration [56]. Best practices drawn from case evidence and guidance (e.g., Microsoft’s Responsible AI toolkit) include:
  • Aligning AI deployment with business strategy, not simply substituting human labor.
  • Embedding human oversight and feedback loops, particularly where AI systems operate in open-ended, generative domains.
  • Monitoring performance and drift, using both quantitative and qualitative criteria.
  • Investing in employee retraining and change management, enabling augmentation rather than redundancy.
Open innovation ecosystems and consortia (e.g., model-sharing hubs) can facilitate safer and faster adoption—especially in smaller firms lacking internal R&D capabilities.

6.3. Environmental Sustainability and Energy Policy

The environmental footprint of generative AI is nontrivial. Electricity consumption for inference now rivals training in magnitude. By 2026, global data center electricity use is expected to exceed 1000 TWh [53], driven by generative AI inference.
Policymakers should therefore:
  • Mandate reporting of energy and water consumption for both training and inference workloads.
  • Create incentives for colocation with renewables and investment in energy-efficient cooling infrastructure.
  • Establish an AI Energy Star efficiency rating program for large-scale models.
  • Price externalities via carbon taxes or energy-use quotas, pushing research toward more efficient architectures (e.g., sparse models, quantization, model reuse).
Demand-side adjustments—such as digital sobriety campaigns and energy-aware scheduling—can further reduce unnecessary compute demand and align system behavior with sustainability targets [33,57].

6.4. Fairness, Ethics, and Governance

Generative AI systems embed structural risks related to fairness, explainability, and algorithmic accountability [58]. Unlike traditional classifiers, they produce open-ended outputs (e.g., text, audio, image) that may reflect biases embedded in training corpora. Prior work has shown, for example, that word embeddings encode gender stereotypes (e.g., “man”—“programmer”, “woman”—“homemaker”) [44], and that LLMs can reproduce historical inequities [37,43].
To assess such biases, researchers can adapt fairness metrics to the domain of generative models. One illustrative metric is demographic parity:
DP = Pr(Ŷ = 1 | S = 1) − Pr(Ŷ = 1 | S = 0),
where S denotes a sensitive attribute and Ŷ indicates that a generated output satisfies a desirable criterion (e.g., accurate or inclusive representation). More advanced metrics include equalized odds and predictive parity, which compare the distributions of generated outputs conditional on inputs.
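Demographic parity is simple to estimate from a labeled audit sample. The sketch below computes the gap from hypothetical (S, Ŷ) pairs; the sample and the favorable-output labels are invented for illustration.

```python
def demographic_parity_gap(outputs):
    """DP = Pr(Yhat=1 | S=1) - Pr(Yhat=1 | S=0) over (s, y_hat) pairs."""
    def rate(s):
        group = [y for grp, y in outputs if grp == s]
        return sum(group) / len(group)
    return rate(1) - rate(0)

# Hypothetical audit sample: (sensitive attribute S, desirable-output flag).
sample = [(1, 1), (1, 1), (1, 0), (1, 1),   # group S=1: 3/4 favorable
          (0, 1), (0, 0), (0, 0), (0, 1)]   # group S=0: 2/4 favorable
print(demographic_parity_gap(sample))  # gap of 0.75 - 0.50 = 0.25
```

For generative systems, the binary flag Ŷ would itself come from a human or automated rater judging each output, which is where most of the practical measurement difficulty lies.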
Beyond measurement, deployment acceptance hinges on user perceptions. Studies show that users’ initial algorithm aversion may diminish with repeated exposure and performance visibility [59]. Interpretable AI methods such as ROLEX (robust local explanations) have been shown to improve trust among professionals in high-stakes domains [60].
Governance frameworks are emerging but fragmented. The NIST AI RMF promotes transparency, fairness, and lifecycle-oriented risk management. Major firms have developed internal standards (e.g., Microsoft’s Responsible AI Standard, Google’s AI Principles). Regulatory efforts such as the EU AI Act aim to institutionalize safeguards for high-risk systems.
Effective governance must:
  • Require algorithmic impact assessments, human oversight mechanisms, and opt-out procedures.
  • Promote multidisciplinary governance bodies, including ethicists, legal scholars, technologists, and affected communities.
  • Evolve iteratively alongside technical progress to maintain relevance without impeding responsible experimentation.
As Miškufová et al. [61] argued, the duality of generative AI—its power to enhance productivity and its potential to compromise academic and professional standards—requires that governance evolve alongside innovation, not behind it.

6.5. Research Agenda for Generative AI

Finally, the empirical framework suggests a structured agenda for future research. First, longitudinal and causal studies are needed to quantify the impact of generative AI on wages, productivity, and inequality across various sectors and contexts. Second, disruption metrics, such as the Disruption Index, can be refined with more granular task-level data on complementarity, time savings, and adoption lags.
Third, cross-country institutional studies should examine how labor laws, education systems, and digital infrastructures mediate the adoption of generative AI. These studies should include low- and middle-income countries, where the adoption of generative AI may follow different diffusion pathways.
Fourth, legal and normative questions—such as intellectual property in generated outputs, authorship attribution, and liability for harm—require interdisciplinary collaboration among IS scholars, legal theorists, and policymakers.
Ultimately, the information systems research community should establish benchmarks, data resources, and methodological standards for investigating generative AI in organizational and societal contexts.
To consolidate the implications discussed above, Table 11 provides a stakeholder-specific summary of recommended actions and considerations for responsible governance and adoption of generative AI.
Table 11 distills the discussion into concrete actions for different stakeholder groups. Policymakers must balance innovation and protection; firms must integrate AI responsibly and invest in worker retraining; researchers should refine measurement and address normative questions; technology providers should prioritize energy efficiency and interpretability; and end users should cultivate AI literacy and demand transparency.

6.6. Limitations and Uncertainty

The analysis and projections presented here should be interpreted in light of several important limitations. First, the disruption index relies on exposure, adoption, time-savings, and complementarity measures drawn from surveys and secondary data; these measures may be imprecise and may not fully capture within-occupation variation in tasks or skills. Second, the weights assigned to index components are normative and, although sensitivity checks indicate robustness, different weighting schemes could yield alternative rankings. Third, the adoption trajectories and time-savings estimates are calibrated to early diffusion patterns from 2023 to 2025. Unforeseen technological breakthroughs, regulatory interventions, economic shocks, or cultural backlash could accelerate or delay adoption in ways not captured by a logistic curve. Fourth, data sources vary in scope and methodology (national surveys, firm reports, expert assessments), and comparability across countries and sectors may be imperfect. Fifth, the analysis focuses on the United States and other high-income economies; adoption and labor impacts in low- and middle-income countries may follow different pathways due to infrastructure gaps and institutional differences. These caveats underscore the need for ongoing monitoring, richer data, and methodological pluralism when assessing the future of work under generative AI.

7. Conclusions

Generative AI represents a fundamental shift in how knowledge, content, and services are created and disseminated. Enabled by technical advances in GANs, VAEs, diffusion models, transformers, and reinforcement learning from human feedback (RLHF), these models generate text, images, and code that increasingly rival human outputs. Yet the same capabilities that enable creativity and productivity also pose risks—particularly to routine cognitive labor.
This study synthesized the technical foundations, application domains, and labor market projections to provide an integrated perspective on the landscape of generative AI. A Disruption Index that quantifies occupational vulnerability based on exposure, adoption, time savings, and skill complementarity is introduced. Empirical estimates suggest that up to 30% of work hours may be impacted by 2030, with considerable heterogeneity across sectors and roles.
Critically, the labor consequences of generative AI are not technologically deterministic. Instead, they depend on institutional context, design choices, and governance structures. Historical evidence from prior general-purpose technologies—such as electrification and computing—indicates that gains in productivity and well-being emerge when technological innovation is paired with investments in human capital, organizational redesign, and regulatory adaptation.
The findings also highlight the risk of widening inequalities. Adoption and exposure differ across income groups, education levels, and regions, raising concerns about distributive justice. Left unaddressed, generative AI may entrench existing disparities. Responsible deployment requires attention to fairness, robustness, privacy, and transparency—not as afterthoughts, but as central design goals.
Ultimately, the future of generative AI is a choice. Whether it displaces or empowers workers depends not only on its technical trajectory but also on the policies, institutions, and values that guide its development and deployment.

Future Research Directions

Future research should focus on five priorities. First, longitudinal and causal studies (e.g., natural experiments, panel and firm-level data) are needed to separate displacement from augmentation and to measure impacts on productivity, wages, employment, and skill development. Second, the paper's disruption framework should be refined and extended with better data on time savings, task-level complementarity, and adoption differences, complemented by comparative institutional work across varying labor-market structures. Third, sustained work on fairness, bias, and accountability is required, including adapting metrics such as demographic parity and equalized odds to open-ended outputs and strengthening auditing and sociotechnical studies of differential impacts on marginalized groups. Fourth, environmental externalities such as energy and water use should be integrated into evaluations, with empirical analysis of performance–sustainability trade-offs and techniques such as compression, low-precision arithmetic, and model reuse. Fifth, deeper interdisciplinary collaboration is needed across law, philosophy, and industrial relations, with information systems research helping to connect technical capabilities to institutions and to translate that understanding into ethical, inclusive design principles.
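Demographic parity, one of the fairness metrics mentioned above, compares positive-outcome rates across groups. The following sketch (function name and toy data are illustrative, not from the paper) computes the largest gap in positive-prediction rates between groups:

```python
def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rates across groups.

    y_pred -- binary predictions (0/1)
    groups -- group label for each prediction
    """
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# Toy example: group "A" receives positive outcomes at 0.75,
# group "B" at 0.25, so the parity gap is 0.5.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

Adapting such rate-based metrics to open-ended generative outputs, where there is no single binary decision per instance, is precisely the open problem flagged in priority (3).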

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the author.

Conflicts of Interest

The author declares no conflicts of interest.

References

1. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144.
2. Li, X.; Meng, C.; Zhou, H.; Guo, Y.; Xue, B.; Yu, T.; Lu, Y. Generative Learning from Semantically Confused Label Distribution via Auto-Encoding Variational Bayes. Electronics 2025, 14, 2736.
3. Khader, F.; Müller-Franzes, G.; Tayebi Arasteh, S.; Han, T.; Haarburger, C.; Schulze-Hagen, M.; Schad, P.; Engelhardt, S.; Baeßler, B.; Foersch, S.; et al. Denoising diffusion probabilistic models for 3D medical image generation. Sci. Rep. 2023, 13, 7303.
4. Choi, M.; Kim, H.; Han, B.; Xu, N.; Lee, K.M. Channel Attention Is All You Need for Video Frame Interpolation. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 10663–10671.
5. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2–7 June 2019; pp. 4171–4186.
6. Creswell, A.; White, T.; Dumoulin, V.; Arulkumaran, K.; Sengupta, B.; Bharath, A.A. Generative Adversarial Networks: An Overview. IEEE Signal Process. Mag. 2018, 35, 53–65.
7. Wang, K.; Gou, C.; Duan, Y.; Lin, Y.; Zheng, X.; Wang, F.Y. Generative adversarial networks: Introduction and outlook. IEEE/CAA J. Autom. Sin. 2017, 4, 588–598.
8. Gmyrek, P.; Berg, J.; Bescond, D. Generative AI and Jobs: A Global Analysis of Potential Effects on Job Quantity and Quality; ILO: Geneva, Switzerland, 2023.
9. Hartley, J.; Jolevski, F.; Melo, V.; Moore, B. The Labor Market Effects of Generative Artificial Intelligence. 2025. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5136877 (accessed on 10 January 2026).
10. Al Naqbi, H.; Bahroun, Z.; Ahmed, V. Enhancing Work Productivity through Generative Artificial Intelligence: A Comprehensive Literature Review. Sustainability 2024, 16, 1166.
11. Maslej, N.; Fattorini, L.; Perrault, R.; Gil, Y.; Parli, V.; Kariuki, N.; Capstick, E.; Reuel, A.; Brynjolfsson, E.; Etchemendy, J.; et al. Artificial Intelligence Index Report 2025. arXiv 2025, arXiv:2504.07139.
12. Shou, M.; Jia, F.; Yu, J.; Wu, Y. Challenges in developing information and communication technology (ICT) use for rural e-governance: An ecology perspective. Inf. Syst. J. 2024, 35, 247–278.
13. Bick, A.; Blandin, A.; Deming, D.J. The Rapid Adoption of Generative AI. Manag. Sci. 2026.
14. Deranty, J.P.; Corbin, T. Artificial intelligence and work: A critical review of recent research from the social sciences. AI Soc. 2022, 39, 675–691.
15. Autor, D.H. Why Are There Still So Many Jobs? The History and Future of Workplace Automation. J. Econ. Perspect. 2015, 29, 3–30.
16. Howard, J. Artificial intelligence: Implications for the future of work. Am. J. Ind. Med. 2019, 62, 917–926.
17. Haefner, N.; Wincent, J.; Parida, V.; Gassmann, O. Artificial intelligence and innovation management: A review, framework, and research agenda. Technol. Forecast. Soc. Change 2021, 162, 120392.
18. Kuzior, A.; Sira, M.; Brożek, P. Use of Artificial Intelligence in Terms of Open Innovation Process and Management. Sustainability 2023, 15, 7205.
19. Natasia, S.R.; Wiranti, Y.T.; Parastika, A. Acceptance analysis of NUADU as e-learning platform using the Technology Acceptance Model (TAM) approach. Procedia Comput. Sci. 2022, 197, 512–520.
20. Pinaya, W.H.L.; Tudosiu, P.D.; Dafflon, J.; Da Costa, P.F.; Fernandez, V.; Nachev, P.; Ourselin, S.; Cardoso, M.J. Brain Imaging Generation with Latent Diffusion Models. In Deep Generative Models; Springer Nature: Cham, Switzerland, 2022; pp. 117–126.
21. Wang, Y.; Yao, Q.; Kwok, J.T.; Ni, L.M. Generalizing from a Few Examples: A Survey on Few-shot Learning. ACM Comput. Surv. 2020, 53, 1–34.
22. Huang, Y.; Fields, K.G.; Ma, Y. A tutorial on generative adversarial networks with application to classification of imbalanced data. Stat. Anal. Data Min. ASA Data Sci. J. 2021, 15, 543–552.
23. Modake, R.; Patil, D. Evaluating Generative AI Applications. Int. J. Glob. Innov. Solut. (IJGIS) 2024.
24. Leon, M.; Nápoles, G.; García, M.M.; Bello, R.; Vanhoof, K. Two Steps Individuals Travel Behavior Modeling through Fuzzy Cognitive Maps Pre-definition and Learning. In Advances in Soft Computing; Springer: Berlin/Heidelberg, Germany, 2011; pp. 82–94.
25. Buzducea (Drăgoi), C.A.; Drăgoi, M.V.; Cristoiu, C.; Puiu, R.A.; Puiu, M.; Petrea, G.; Navligu, B.C. Machine Learning in Education: Predicting Student Performance and Guiding Institutional Decisions. Educ. Sci. 2026, 16, 76.
26. Costa, C.J.; Aparicio, J.T.; Aparicio, M. Socio-Economic Consequences of Generative AI: A Review of Methodological Approaches. In Proceedings of 19th Iberian Conference on Information Systems and Technologies (CISTI 2024); Springer Nature: Cham, Switzerland, 2026; pp. 509–521.
27. Leon, M. Generative Artificial Intelligence and Prompt Engineering: A Comprehensive Guide to Models, Methods, and Best Practices. Adv. Sci. Technol. Eng. Syst. J. 2025, 10, 01–11.
28. Deng, Y.; Zheng, J.; Huang, L.; Kannan, K. Let Artificial Intelligence Be Your Shelf Watchdog: The Impact of Intelligent Image Processing-Powered Shelf Monitoring on Product Sales. MIS Q. 2023, 47, 1045–1072.
29. Abdel-Karim, B.; Pfeuffer, N.; Carl, K.V.; Hinz, O. How AI-Based Systems Can Induce Reflections: The Case of AI-Augmented Diagnostic Work. MIS Q. 2023, 47, 1395–1424.
30. Leon, M. GPT-5 and open-weight large language models: Advances in reasoning, transparency, and control. Inf. Syst. 2026, 136, 102620.
31. Ziakis, C. Generative Artificial Intelligence Adoption: An Exploration of Challenges and Perceptions. In The Economic Impact of Small and Medium-Sized Enterprises; Springer Nature: Cham, Switzerland, 2024; pp. 213–231.
32. Ali, H.; Mustafa, A.u.; Aysan, A.F. Global adoption of generative AI: What matters most? J. Econ. Technol. 2025, 3, 166–176.
33. Toderas, M. Artificial Intelligence for Sustainability: A Systematic Review and Critical Analysis of AI Applications, Challenges, and Future Directions. Sustainability 2025, 17, 8049.
34. Leon, M. The Escalating AI's Energy Demands and the Imperative Need for Sustainable Solutions. WSEAS Trans. Syst. 2024, 23, 444–457.
35. Sannon, S.; Sun, B.; Cosley, D. Privacy, Surveillance, and Power in the Gig Economy. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–15.
36. Sabherwal, R.; Grover, V. The Societal Impacts of Generative Artificial Intelligence: A Balanced Perspective. J. Assoc. Inf. Syst. 2024, 25, 13–22.
37. Machine Learning and the City: Applications in Architecture and Urban Design; Wiley: Hoboken, NJ, USA, 2022.
38. Malagi, V.; Annapoorna, M.S.; Vinothkumar, H.; Dilavar, S.N.; Dahigaonkar, D.J.; Bhutani, M. Exploring Generative AI Models in Enhanced Communication Systems for Biomedical Solutions. In Innovative Computing and Communications; Springer Nature: Singapore, 2025; pp. 119–134.
39. Zhang, W.; Peng, Z.; Zhao, F.; Feng, B.; Mei, X. A novel deep reinforcement learning framework based on digital twins for dynamic job shop scheduling problems. Expert Syst. Appl. 2026, 296, 128708.
40. Wang, Q.; He, Y.; Tang, C. Mastering construction heuristics with self-play deep reinforcement learning. Neural Comput. Appl. 2022, 35, 4723–4738.
41. Leon, M. Cognitive mapping variants and their training algorithms. Comput. Sci. Rev. 2026, 59, 100862.
42. Li, B.; Huang, N.; Shi, W. Forced to Change? Media Exposure of Labor Issues and Firm Artificial Intelligence Investment. Inf. Syst. Res. 2025.
43. Zajko, M. AI as automated inequality: Statistics, surveillance and discrimination. In Handbook of Critical Studies of Artificial Intelligence; Edward Elgar Publishing: Cheltenham, UK, 2023; pp. 343–353.
44. Chen, P.; Wu, L.; Wang, L. AI Fairness in Data Management and Analytics: A Review on Challenges, Methodologies and Applications. Appl. Sci. 2023, 13, 10258.
45. Eloundou, T.; Manning, S.; Mishkin, P.; Rock, D. GPTs are GPTs: Labor market impact potential of LLMs. Science 2024, 384, 1306–1308.
46. Corvello, V. Generative AI and the future of innovation management: A human centered perspective and an agenda for future research. J. Open Innov. Technol. Mark. Complex. 2025, 11, 100456.
47. Zarifhonarvar, A. Economics of ChatGPT: A labor market view on the occupational impact of artificial intelligence. J. Electron. Bus. Digit. Econ. 2023, 3, 100–116.
48. Khan, S.; Mehmood, S.; Khan, S.U. Navigating innovation in the age of AI: How generative AI and innovation influence organizational performance in the manufacturing sector. J. Manuf. Technol. Manag. 2024, 36, 597–620.
49. Leon, M.; Depaire, B.; Vanhoof, K. Fuzzy Cognitive Maps with Rough Concepts. In Artificial Intelligence Applications and Innovations; Springer: Berlin/Heidelberg, Germany, 2013; pp. 527–536.
50. Shui, F.; Chen, G.; He, R.; Luo, D.; Wang, X. How to Ensure System Sustainability: Paradoxical Cognition and Adaptive Strategies for the Value Creation Process of Megaprojects. Systems 2025, 13, 334.
51. Wang, Y.; Yu, Y.; Khan, A. Digital sustainability: Dimension exploration and scale development. Acta Psychol. 2025, 256, 105028.
52. Guarascio, D.; Piccirillo, A.; Reljic, J. Robots vs. Workers: Evidence From a Meta-Analysis. J. Econ. Surv. 2025, 39, 2254–2271.
53. Antoniuk, D.; Koliada, O. Ensuring sustainable use of generative artificial intelligence by enterprises based on resource consumption optimization. East.-Eur. J. Enterp. Technol. 2025, 3, 68–77.
54. Leon, M. Leveraging Generative AI for On-Demand Tutoring as a New Paradigm in Education. Int. J. Cybern. Inform. 2024, 13, 17–29.
55. Zhang, D.D.; Peng, G.; Yao, Y.; Browning, T.R. Is a College Education Still Enough? The IT-Labor Relationship with Education Level, Task Routineness, and Artificial Intelligence. Inf. Syst. Res. 2024, 35, 992–1010.
56. Fügener, A.; Grahl, J.; Gupta, A.; Ketter, W. Cognitive Challenges in Human–Artificial Intelligence Collaboration: Investigating the Path Toward Productive Delegation. Inf. Syst. Res. 2022, 33, 678–696.
57. Bandi, A.; Adapa, P.V.S.R.; Kuchi, Y.E.V.P.K. The Power of Generative AI: A Review of Requirements, Models, Input–Output Formats, Evaluation Metrics, and Challenges. Future Internet 2023, 15, 260.
58. Rhue, L. The Anchoring Effect, Algorithmic Fairness, and the Limits of Information Transparency for Emotion Artificial Intelligence. Inf. Syst. Res. 2024, 35, 1479–1496.
59. Turel, O.; Kalhan, S. Prejudiced against the Machine? Implicit Associations and the Transience of Algorithm Aversion. MIS Q. 2023, 47, 1369–1394.
60. Kim, B.R.; Srinivasan, K.; Kong, S.H.; Kim, J.H.; Shin, C.S.; Ram, S. ROLEX: A Novel Method for Interpretable Machine Learning Using Robust Local Explanations. MIS Q. 2023, 47, 1303–1332.
61. Miškufová, M.; Košíková, M.; Vašaničová, P.; Kiseľáková, D. Digitalization and Artificial Intelligence: A Comparative Study of Indices on Digital Competitiveness. Information 2025, 16, 286.
Table 1. Historical milestones in generative AI.
Year | Milestone | Description | Implications
2013–2014 | GANs [1] | Introduced adversarial training between generator and discriminator networks. | Enabled realistic image synthesis and spurred research on adversarial games.
2013–2014 | VAEs [2] | Formulated variational inference for latent variable models, maximizing a tractable evidence lower bound (ELBO). | Provided stochastic encoders/decoders for generative modeling and representation learning.
2017 | Transformer [4] | Proposed self-attention mechanism with no recurrence; improved parallelism and performance on machine translation. | Became the foundation for large language models (LLMs) such as BERT [5] and the GPT series.
2020 | Diffusion Models [3] | Introduced denoising diffusion probabilistic models that learn a reverse diffusion process. | Achieved high-fidelity image generation; later extended to latent diffusion [20].
2022 | Instruction-tuned LLMs | OpenAI's InstructGPT used RLHF to align models with human instructions [6]. | Improved usefulness and reduced toxicity of LLM outputs; led to ChatGPT and similar assistants.
Table 2. Comparison of major generative models.
Model Class | Objective Function | Strengths | Limitations
GANs | Minimax game in Equation (1) between generator G and discriminator D | Produces sharp, realistic samples; flexible architectures; conditional generation possible | Training instability; mode collapse; difficulty estimating likelihood [1]
VAEs | Maximize ELBO in Equation (2); latent variable inference | Efficient learning and inference; continuous latent space enables interpolation; likelihood tractable | Tendency to produce blurry outputs; trade-off between reconstruction fidelity and latent regularization [2]
Diffusion | Minimize variational bound on negative log-likelihood; learn reverse denoising process | High sample quality; stable training; straightforward likelihood estimation | Slow sampling due to many timesteps; sensitive to variance schedule; computationally intensive [3]
Transformers (LLMs) | Next-token prediction using self-attention in Equation (3); pre-training on large corpora | Scalability; universal approximators of sequence data; strong zero-shot and few-shot performance | Resource intensive; susceptible to hallucination and bias; alignment challenges [4,21]
RLHF | Maximize expected human reward subject to KL regularization | Aligns outputs with human intent; reduces toxicity; improves usefulness | Requires costly human feedback; reward model may encode biases; may degrade base model quality [6,7]
Table 3. Selected evaluation metrics for generative models.
Metric | Formula | Interpretation
Perplexity | $\mathrm{PPL} = \exp\left(-\frac{1}{N}\sum_{t=1}^{N}\log p_\theta(x_t \mid x_{<t})\right)$ | Lower perplexity indicates better predictive performance for language models.
FID (Fréchet Inception Distance) | $\lVert\mu_x - \mu_g\rVert_2^2 + \mathrm{Tr}\left(\Sigma_x + \Sigma_g - 2(\Sigma_x \Sigma_g)^{1/2}\right)$ | Measures similarity between real and generated feature distributions; lower values are better.
Inception Score | $\exp\left(\mathbb{E}_{x \sim p_g}\,\mathrm{KL}\left(p(y \mid x)\,\Vert\,p(y)\right)\right)$ | Evaluates diversity and quality of generated images via an Inception classifier; higher is better.
BLEU | $\mathrm{BLEU} = \exp\left(\sum_{n=1}^{N} w_n \log p_n\right)$ | n-gram precision metric for machine translation and text generation; higher values indicate a closer match to reference texts.
ROUGE-L | $F_{\mathrm{LCS}} = \frac{(1+\beta^2)\,\mathrm{LCS}(X,Y)}{\lvert X\rvert + \beta^2 \lvert Y\rvert}$ | Measures the longest common subsequence overlap for summarization tasks; higher values are better.
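As a concrete illustration of the perplexity formula in Table 3, the sketch below computes perplexity from a sequence of per-token conditional probabilities. The function name and the toy probabilities are illustrative assumptions, not outputs of any real model:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(-(1/N) * sum_t log p(x_t | x_<t)).

    token_probs -- conditional probability the model assigned
                   to each observed token, in order.
    """
    n = len(token_probs)
    avg_nll = -sum(math.log(p) for p in token_probs) / n
    return math.exp(avg_nll)

# A model that assigns probability 0.25 to every token behaves
# like a uniform choice over 4 options, so perplexity is 4:
print(round(perplexity([0.25, 0.25, 0.25, 0.25]), 6))  # 4.0
```

The same exponential-of-average-log structure explains why lower perplexity means better prediction: it is the effective branching factor the model faces per token.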
Table 4. Generative AI risk categories and mitigation actions.
Risk Category | Description | Recommended Mitigation
Fairness | Models may perpetuate historical biases present in training data, leading to discriminatory outputs | Conduct bias audits; diversify and balance training datasets; implement fairness regularization or constraints.
Privacy | Generated outputs may memorize or leak sensitive personal information | Employ differential privacy, restrict training on protected data, and enforce data minimization and access controls.
Robustness | Generative models can produce unsafe or hallucinated content, or be susceptible to adversarial prompts | Adopt adversarial training, integrate guardrails and content filters, and implement uncertainty estimation and fallbacks.
Transparency | Opaque generation processes and proprietary training data reduce user trust | Provide documentation, model cards, and transparency reports; offer explanations for model decisions; facilitate user feedback.
Security | Malicious actors may misuse generative models to automate scams, fraud, or misinformation | Monitor usage for potential abuse; implement usage policies and user verification procedures; align with regulatory standards while cooperating with law enforcement.
Table 5. Generative AI adoption by demographic group (U.S., Aug 2024).
Group | At Work (%) | Daily Use (%)
Gender | |
   Men | ∼32 | ∼12
   Women | ∼23 | ∼9
Age | |
   Under 40 years | 34 | 11
   50+ years | 17 | 5
Education | |
   Bachelor's or higher | 40 | 14
   No college degree | 20 | 7
Occupation | |
   Computer/Mathematical | 49.6 | 18
   Management | 49.0 | 16
   Blue collar | 22.1 | 6
Table 6. Business adoption of AI and generative AI across countries.
Country | AI 2023 (%) | AI 2024 (%) | Generative AI Highlights
Canada | 6.1 | 10.6 | 9.3% used genAI in Q1 2024; 4.6% planned adoption
U.K. | 9.0 | 22.0 | ONS projects adoption will triple from 2023 to 2024
United States | 4.2 | 8.3 | 8.3% used generative AI for production in Apr 2025; 10.9% plan adoption
Denmark | 16.0 | 25.0 | Communication sector exceeds 44% AI usage
Sweden | 15.0 | 24.0 | Two-thirds of ICT firms adopted AI in 2024
Estonia | 10.0 | 27.0 | Adoption surged 2.7× after genAI release
Table 7. Representative use cases of generative AI across sectors.
Sector | Application | Benefits | Risks/Exposures
Customer operations | Chatbots and virtual assistants handle queries and generate responses. | Reduces workloads, improves response time, and personalizes support. | Hallucinations, privacy concerns, staff deskilling.
Marketing and sales | Generates marketing copy, emails, and ad creatives. | Boosts productivity, enables A/B testing, and personalizes at scale. | Bias, manipulation, compliance issues.
Software engineering | Code assistants autocomplete, generate boilerplate, and suggest fixes. | Speeds development, improves consistency. | Vulnerabilities, licensing issues, skill erosion.
Research and development | Molecular design, drug discovery, and material synthesis. | Accelerates discovery and reduces lab costs. | Bias, ethical concerns, harmful agents.
Healthcare | Drafts notes, summarizes records, and assists in diagnosis. | Reduces admin burden, enhances decision support. | Hallucinations, privacy breaches, safety risks.
Finance | Summarizes reports, aids portfolio analysis, and generates scenarios. | Boosts efficiency, detects fraud patterns. | Model risk, regulation uncertainty.
Legal services | Drafts contracts and memos, and summarizes case law. | Saves time, expands access to justice. | Errors, privacy risks, and liability.
Logistics | Optimizes routing and scheduling, handles inquiries. | Improves efficiency, delivery accuracy. | Deskilling, uneven adoption.
Creative industries | Produces art, music, and designs. | Augments creativity, lowers entry barriers. | Copyright issues, artist displacement.
Education | Generates learning materials and tutoring support. | Improves access, frees instructor time. | Misinformation, bias, integrity risks.
Human resources | Writes job descriptions and screens résumés. | Speeds hiring, standardizes messaging. | Bias reproduction, privacy concerns.
Table 8. Extended sectoral use cases, economic potential, and adoption.
Sector | Expanded Applications | Value Potential and Adoption | Risks and Considerations
Manufacturing | Generative component design, process plan optimization, predictive maintenance, supply-chain simulation. | 51% adoption of code copilots; 31% for support chatbots. Investments rose from $2.3B (2023) to $13.8B (2024). 72% of leaders expect broader use [31]. | IP ambiguity, deskilling, design safety, and domain-expert dependence [30].
Agriculture | Synthetic crop imagery, weather simulation, soil/nutrient modeling, protein design. | High value potential in yield and sustainability. Pilots ongoing; adoption limited by rural infrastructure. Estimated long-term value in hundreds of billions [32]. | Data scarcity, model bias, uneven access, and effects on smallholder livelihoods.
Energy | Forecasting, grid dispatch optimization, battery/catalyst design, virtual energy advisors. | Adoption is in its early stages, but promising for renewable integration. Dependent on climate alignment and regulatory clarity [26,34]. | Grid strain, rebound effects, opacity, and policy risk [33].
Government | Chatbots, translation, policy drafting, comment summarization, and legal research. | Pilots launched in tax and legal systems. Efficiency potential in licensing and legislation workflows [35,36]. | Bias in decision making, accountability gaps, privacy/security concerns, and the need for human oversight [37].
Science | Molecular and protein design, accelerated simulations, literature synthesis, and experimental code generation. | Enhances speed in drug discovery and materials science. Value manifests through faster innovation [38,41]. | Reproducibility issues, hallucination in synthesis, ethics in synthetic biology, credit assignment [26,39].
Media | Art, music, journalism, VFX, dubbing, and co-creation. | Rapid adoption via accessible tools; reshaping platform economics and business models. | Copyright disputes, deepfakes, misinformation risks, and concerns about content authenticity [43,44].
Table 9. Estimated generative AI disruption metrics by occupation (2025).
Occupation | Exposure E_j (%) | Adoption A_{j,2025} (%) | Time Savings S_{j,2025} (%)
Software developers | 20 | 35 | 8
Paralegals/assistants | 50 | 15 | 6
Logistics managers | 90 | 30 | 5
Claims adjusters | 70 | 25 | 7
Personal financial advisors | 20 | 10 | 4
Bus/truck mechanics | 0 | 5 | 1
Clerical support workers | 82 | 20 | 6
Data scientists | 15 | 50 | 10
Table 10. Projected generative AI disruption index by occupation (2030).
Occupation | Exposure E_j (%) | Adoption A_{j,2030} (%) | DI_{j,2030}
Software developers | 25 | 65 | 0.44
Paralegals/assistants | 60 | 40 | 0.59
Logistics managers | 95 | 60 | 0.77
Claims adjusters | 75 | 50 | 0.68
Personal financial advisors | 30 | 25 | 0.32
Bus/truck mechanics | 0 | 10 | 0.06
Clerical support workers | 85 | 50 | 0.71
Data scientists | 20 | 80 | 0.35
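A disruption index of this kind is typically a weighted combination of its normalized components. The sketch below illustrates that general form only: the function name and the weights are hypothetical placeholders, not the paper's calibrated weighting scheme, and the inputs are toy values rather than entries from Table 10.

```python
def disruption_index(exposure, adoption, savings, complementarity,
                     weights=(0.35, 0.30, 0.20, 0.15)):
    """Weighted combination of normalized components, each in [0, 1].

    exposure, adoption, savings -- shares in [0, 1]
    complementarity -- in [0, 1]; higher complementarity means AI
                       augments rather than replaces, lowering disruption
    weights -- illustrative only, not the paper's calibrated values
    """
    w_e, w_a, w_s, w_c = weights
    return (w_e * exposure + w_a * adoption + w_s * savings
            + w_c * (1.0 - complementarity))

# Toy comparison: a highly exposed, widely adopted, low-complementarity
# occupation scores well above a sheltered, complementary one.
high = disruption_index(0.85, 0.50, 0.40, 0.20)
low = disruption_index(0.10, 0.15, 0.05, 0.80)
print(round(high, 3), round(low, 3))
```

Because the weights are normative, the sensitivity analysis mentioned in the limitations amounts to re-running such a computation under alternative weight vectors and checking whether occupational rankings change.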
Table 11. Stakeholder-specific implications for generative AI governance and adoption.
Stakeholder | Key Implications
Policymakers | Mandate algorithmic impact assessments, energy and water usage reporting, and bias audits. Fund large-scale reskilling programs. Enact wage insurance and portable benefits. Balance innovation incentives with safeguards through adaptive regulation.
Firms and Employers | Align generative AI deployment with business objectives. Embed human oversight and context-aware evaluation. Retrain employees for augmented roles. Monitor bias, performance drift, and legal risks to ensure compliance. Adopt open innovation practices for safe technology integration.
Researchers and Scholars | Conduct longitudinal studies on productivity and inequality. Refine disruption metrics using granular task-level data. Explore fairness metrics for generative outputs. Contribute to interdisciplinary debates on IP, authorship, and liability. Develop benchmarks and best practices for empirical studies.
Technology Providers | Improve energy efficiency (e.g., through model sparsity and quantization). Support fairness audits and interpretability tools (e.g., ROLEX). Disclose model limitations and sources of training data. Engage in collaborative governance forums.
End Users and Institutions | Cultivate AI literacy and digital sobriety. Develop norms for responsible use of AI in education, healthcare, and law. Participate in audits, provide feedback, and demand transparency. Balance convenience with awareness of systemic risks.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Leon, M. Generative AI as a General-Purpose Technology: Foundations, Applications, and Labor Market Implications Through 2030. Big Data Cogn. Comput. 2026, 10, 69. https://doi.org/10.3390/bdcc10030069

