Review

ChatGPT and Digital Transformation: A Narrative Review of Its Role in Health, Education, and the Economy

by Dag Øivind Madsen 1,* and David Matthew Toston II 2
1 Department of Business, Marketing and Law, USN School of Business, University of South-Eastern Norway, 3511 Hønefoss, Norway
2 Independent Researcher, Roseville, CA 95747, USA
* Author to whom correspondence should be addressed.
Digital 2025, 5(3), 24; https://doi.org/10.3390/digital5030024
Submission received: 21 February 2025 / Revised: 11 June 2025 / Accepted: 13 June 2025 / Published: 28 June 2025

Abstract

ChatGPT, a prominent large language model developed by OpenAI, has rapidly become embedded in digital infrastructures across various sectors. This narrative review examines its evolving role and societal implications in three key domains: healthcare, education, and the economy. Drawing on recent literature and examples, the review explores ChatGPT’s applications, limitations, and ethical challenges in each context. In healthcare, the model is used to support patient communication and mental health services, while raising concerns about misinformation and privacy. In education, it offers new forms of personalized learning and feedback, but also complicates assessment and equity. In the economy, ChatGPT augments business operations and knowledge work, yet introduces risks related to job displacement, data governance, and automation bias. The review synthesizes these developments to highlight how ChatGPT is driving digital transformation while generating new demands for oversight, regulation, and critical inquiry. It concludes by outlining priorities for future research and policy, emphasizing the need for interdisciplinary collaboration, transparency, and inclusive access as generative AI continues to evolve.

1. Introduction

Large Language Models (LLMs), such as ChatGPT, represent a pivotal advancement in artificial intelligence (AI) [1,2], particularly in the context of digital transformation across key sectors like healthcare, education, and the economy. These domains have witnessed accelerated integration of conversational AI into digital infrastructures—including telemedicine systems, learning management environments, and economic decision-making tools—making the assessment of LLMs’ societal role especially urgent and relevant. ChatGPT’s widespread use across these areas exemplifies both the promise and the complexity of AI-driven technologies in digitally mediated contexts.
ChatGPT’s development is rooted in decades of progress in machine learning and natural language processing (NLP) [3,4,5,6]. It has advanced through several iterations, from GPT-2 to GPT-4, each contributing improvements in accuracy, coherence, and safety [7,8,9]. OpenAI’s iterative model refinement has enabled ChatGPT to support increasingly complex applications, with newer versions capable of context-sensitive language generation, multilingual communication, and domain-specific support. These developments have positioned ChatGPT as a significant driver of digital innovation.
This review addresses three guiding questions: (1) What are the current capabilities and limitations of ChatGPT in digitally transforming healthcare, education, and economic domains? (2) What ethical and societal risks emerge from this integration? (3) What are the implications for policy, digital governance, and future research? By focusing on these questions, the review aims to synthesize emerging insights into ChatGPT’s evolving role across key pillars of the digital landscape.
A narrative review methodology [10,11] was adopted due to the diversity and rapid evolution of the literature on ChatGPT. This approach allows for a flexible yet integrative synthesis of heterogeneous studies, making it particularly suitable for capturing dynamic trends across multiple domains. While it does not follow a systematic protocol, the narrative format facilitates conceptual mapping and thematic organization across broad application areas [10,11]. The literature used in this review was identified through searches in Scopus, Web of Science, and Google Scholar using combinations of keywords such as “ChatGPT,” “large language models,” “digital health,” “AI in education,” “GPT economy,” “AI bias,” and “LLM ethics.” Additional sources were identified through citation chaining, snowballing, and expert recommendations.
Several recent reviews have explored ChatGPT’s technical features, potential applications in specific domains, or bibliometric patterns [12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27]. While these offer valuable insights, most are either domain-specific, descriptive in focus, or lack integration across applied contexts. In contrast, the present review provides a cross-sectoral synthesis of ChatGPT’s role in digitally transforming healthcare, education, and the economy—three public-facing domains where generative AI is rapidly being integrated. Table 1 summarizes how this review builds on and complements existing literature by addressing applications, ethical concerns, and governance challenges through a societal and policy-oriented lens.
The evolution of ChatGPT—and of generative AI more broadly—has important implications for the field of digital studies. As LLMs are deployed within critical infrastructures, they affect not only communication and knowledge production but also service delivery, governance, and equity. This review provides a cross-sectoral overview of these impacts with an emphasis on digital applications. It also considers how the adoption of ChatGPT intersects with broader technological trends such as automation, platformization, and datafication in digital societies [28,29,30,31,32].
The remainder of this review is structured as follows. Section 2 explores patterns of adoption and interest in ChatGPT, highlighting uptake within health, education, and the economy. Section 3 synthesizes application domains, organized by these three sectors. Section 4 discusses key ethical and technical limitations, including hallucinations, bias, and privacy concerns. Section 5 examines societal implications such as misinformation, equity, and labor displacement. Section 6 identifies future research directions with a focus on sector-specific needs and regulatory developments. Section 7 concludes with reflections on the role of ChatGPT in digital transformation and policy design.

2. Popularity and Uptake

Since its public release in late 2022, ChatGPT has experienced meteoric growth, becoming one of the most widely adopted digital tools in recent memory [33,34,35,36]. Surpassing 100 million users within its first two months, ChatGPT’s adoption trajectory rivals that of past digital disruptors like smartphones and social media platforms [5,14,37]. Yet its ascent is not just a story of viral popularity—it reflects a deeper shift in how individuals, institutions, and industries are reimagining human–machine collaboration in digitally mediated environments [38,39,40].
Much of this uptake stems from ChatGPT’s frictionless accessibility and integration into existing digital ecosystems. The model is available across web interfaces, mobile apps, browser extensions, and APIs, enabling seamless deployment into real-world workflows. In healthcare, ChatGPT has been embedded into electronic health record systems like Epic to assist clinicians with documentation and patient communication. In education, learning management systems such as Moodle and Canvas now feature plug-ins and third-party tools powered by LLMs for grading assistance and feedback generation. Meanwhile, in the economic domain, entrepreneurs and analysts are leveraging ChatGPT to draft business plans, analyze customer sentiment, and automate spreadsheet-based tasks [41,42,43]—all within familiar digital tools like Microsoft Excel or Notion.
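To make this kind of API-level integration more concrete, the minimal Python sketch below tags the sentiment of customer feedback through the OpenAI chat completions endpoint. It is an illustrative sketch rather than a description of any specific deployment mentioned above: the model name, prompt wording, and sample comments are assumptions, and it presumes the openai Python package (v1.x) and an OPENAI_API_KEY environment variable.

```python
# Minimal sketch: sentiment tagging of customer feedback via the OpenAI API.
# Assumes the `openai` Python package (v1.x) is installed and OPENAI_API_KEY
# is set; the model name and prompts are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

feedback = [
    "The checkout process was quick and the support chat was helpful.",
    "My order arrived two weeks late and nobody answered my emails.",
]

for comment in feedback:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the customer comment as "
                        "positive, neutral, or negative. Reply with one word."},
            {"role": "user", "content": comment},
        ],
        temperature=0,  # keep the classification output stable
    )
    print(comment, "->", response.choices[0].message.content.strip())
```

In practice, calls of this kind are typically wrapped in spreadsheet add-ins, Notion integrations, or internal scripts, which is what makes the deployment feel frictionless to end users.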
Public curiosity and institutional interest in ChatGPT [44,45,46] are also reflected in digital attention metrics like Google Trends [47,48]. Figure 1 displays Google Trends data for “ChatGPT,” which shows a dramatic spike in September 2024. This coincided with OpenAI’s developer conference and the release of new multimodal features in GPT-4, including voice, image, and vision capabilities [49]. The timing also overlapped with the return to school and Q4 planning in many industries, suggesting that ChatGPT is not just a flash in the pan but increasingly embedded in academic and economic cycles.
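The attention data underlying Figure 1 can be retrieved programmatically. The short sketch below uses pytrends, an unofficial Python client for Google Trends; the keyword and timeframe are assumptions chosen to roughly mirror the figure, and the values returned are relative interest scores (0–100) rather than absolute search volumes.

```python
# Sketch: weekly Google Trends interest for "ChatGPT" (relative, 0-100 scale).
# Assumes the unofficial `pytrends` package is installed (pip install pytrends);
# the timeframe below is illustrative and may need adjusting.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(kw_list=["ChatGPT"], timeframe="2022-11-01 2025-01-01")

interest = pytrends.interest_over_time()  # pandas DataFrame, one row per period
peak = interest["ChatGPT"].idxmax()
print(f"Peak relative interest ({interest['ChatGPT'].max()}/100) around {peak.date()}")
```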
Institutional adoption is now moving from pilot projects to systemic experimentation. Hospitals are trialing chatbot triage systems and patient engagement tools [50,51,52]. Universities are grappling with course redesigns and integrity policies [53,54,55,56,57]. Businesses are automating customer service, report writing, and forecasting [58,59,60,61]. In each case, ChatGPT is not merely a novelty—it is becoming a component in the digital infrastructures that support work, learning, and decision-making.
The emergence of rival LLMs such as Google Gemini, Meta’s Llama, Anthropic’s Claude, and China’s DeepSeek has further fueled a competitive landscape dubbed the “war of the chatbots” [62]. This rivalry is driving rapid innovation and placing pressure on developers to enhance safety, responsiveness, and domain specialization. Yet despite this crowded field, ChatGPT remains a central player—partly due to its early mover advantage, but also because of its integration across platforms and its cultural ubiquity.
In short, ChatGPT’s popularity is not just a metric of use but a signal of shifting digital norms. As its adoption deepens, the crucial question is not just how many are using it—but how it is being used across sectors. The next section turns to that question, examining ChatGPT’s emerging applications in healthcare, education, and the economy.

3. Applications

ChatGPT’s rapid uptake is not only a function of novelty but also of its real-world utility across key domains of public and economic life [61]. While the model has been tested in fields as diverse as tourism [59,63,64,65], programming [66,67,68], and entertainment [69,70,71], this review focuses on three sectors—healthcare, education, and the economy—where its digital integration is particularly impactful.
The selection of healthcare, education, and the economy is based on three intersecting criteria: (1) the volume and maturity of recent research across these domains, providing a robust base for synthesis; (2) their societal and institutional relevance, as these sectors affect large populations and involve critical infrastructures; and (3) their alignment with Digital’s scope, which emphasizes the role of emerging technologies in reshaping public-facing services, knowledge systems, and economic activity. While ChatGPT is already being explored in other areas, such as tourism and software development, these three domains are especially prominent in terms of visibility, integration, and the urgency of ethical and governance challenges. Each also offers a distinctive lens into how generative AI is transforming professional practices, service delivery, and digital workflows—while introducing new risks and dilemmas that merit careful attention.

3.1. Healthcare

In healthcare, ChatGPT has been adopted as a digital assistant across multiple functions. Early implementations include automated triage tools, virtual medical chatbots, and mental health companions [12,51,72]. Some hospital systems, such as those using Epic, have piloted ChatGPT to help clinicians summarize notes, answer routine patient questions, and generate discharge instructions. Its language simplification capabilities make it especially useful for translating complex terminology into more understandable patient communication [73].
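As a tangible illustration of the language-simplification use case, the hedged sketch below rewrites a fragment of clinical text at a lower reading level via a prompted API call. The prompt wording, sample note, and model name are assumptions for illustration only; any real deployment would require clinician review of outputs and must avoid sending identifiable patient data to external services.

```python
# Sketch: rewriting clinical text in plain language with a prompted LLM call.
# Assumes the `openai` Python package (v1.x) and OPENAI_API_KEY; the prompt,
# sample note, and model name are illustrative. Outputs require clinician
# review, and no real patient data should be sent to an external service.
from openai import OpenAI

client = OpenAI()

note = ("Patient presents with acute exacerbation of COPD; initiate "
        "bronchodilator therapy and taper corticosteroids over 5 days.")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[
        {"role": "system",
         "content": "Rewrite the clinical note in plain language at about a "
                    "6th-grade reading level. Do not add new medical advice."},
        {"role": "user", "content": note},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```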
Mental health applications have also received growing attention [74,75,76,77]. Studies report its potential to assist with reflection, emotional regulation, and cognitive reframing when used alongside traditional therapy [75,78]. Despite promising findings, researchers caution against over-reliance due to ChatGPT’s hallucination risk, where the system generates plausible but factually incorrect statements. This is particularly hazardous in high-stakes medical settings. Additional concerns include algorithmic bias in diagnostic suggestions, privacy vulnerabilities under HIPAA/GDPR, and reduced patient trust in AI-mediated communication.

3.2. Education

In the educational sphere, ChatGPT has triggered widespread experimentation and debate [26,79,80,81,82]. Educators are exploring its use as a tutoring agent, feedback generator, lesson planner, and brainstorming partner [83,84]. Its integration into learning management systems (e.g., Moodle, Canvas) allows for dynamic question-answering, automated essay scoring, and interactive content creation [85,86,87,88].
Applications span all levels of education [79]—from early childhood education to higher education and adult learning [89,90,91,92,93,94] as well as special education [95]. In academic writing, chatbots like ChatGPT are frequently used to clarify research concepts, propose outlines, and edit grammar, raising new questions about originality and authorship [96,97,98]. Some higher education institutions have issued bans or introduced honor codes [99,100], while others are actively building curricula around AI literacy and critical use [101,102,103].
While these innovations open new paths for personalized learning and pedagogical innovation, they also introduce serious challenges. Researchers have raised alarms about academic integrity, the erosion of critical thinking, and the possibility of reinforcing educational inequalities, particularly if access to AI tools is unevenly distributed [104,105].

3.3. Economy

In the economic domain, ChatGPT is being positioned as a low-cost productivity enhancer [106,107]. It is used by businesses of all sizes to generate marketing content, summarize reports, draft emails, analyze customer feedback, and automate repetitive white-collar tasks [42,58,65,108,109,110]. Freelancers and small enterprises are using it to write business plans, pitch decks, and even job descriptions—sometimes reducing or replacing the need for professional services.
Larger firms have started to integrate ChatGPT into business intelligence, forecasting, and customer service automation systems [41,111,112,113]. These developments promise major gains in speed and cost-efficiency but also come with emerging risks. In particular, analysts and commentators have warned of financial hallucinations in forecasting tools, confidential data leakage from model fine-tuning, and regulatory blind spots surrounding the use of generative AI in strategic decisions [60,114,115].
There is also growing concern that LLMs may widen gaps between digitally literate and under-resourced organizations. Companies with access to high-quality data and AI expertise may benefit disproportionately from ChatGPT’s capabilities—amplifying existing inequalities in the digital economy [116].
To provide a concise overview of how ChatGPT is being integrated into different sectors, Table 2 summarizes key use cases, opportunities, and associated risks across healthcare, education, and the economy. This cross-domain snapshot highlights how generative AI is not only enhancing efficiency and personalization but also introducing new ethical, technical, and social challenges. Understanding these trade-offs is crucial for evaluating ChatGPT’s role in digital transformation efforts across industries.
While the applications of ChatGPT across sectors offer notable benefits, they also raise significant ethical concerns. As the technology becomes more deeply embedded in sensitive domains such as healthcare, education, and the economy, questions around bias, misinformation, data privacy, and accountability become increasingly urgent. The following section examines these challenges in detail, highlighting the specific risks and limitations that accompany ChatGPT’s growing presence in digitally mediated environments.

4. Ethical Considerations and Limitations

While ChatGPT offers significant opportunities across healthcare, education, and the economy, its deployment also raises complex ethical questions [5,117,118,119]. These concerns go beyond technical limitations and touch on deeper issues of trust, responsibility, equity, and environmental sustainability. This section organizes the discussion around domain-specific risks in healthcare, education, and the economy, followed by a set of cross-cutting ethical challenges.

4.1. Ethical Implications in Healthcare

The healthcare sector faces particularly acute ethical risks due to the high-stakes nature of medical information. Although ChatGPT can simplify medical terminology, assist with clinical documentation, and promote reflective thinking in mental health contexts, there are several ethical issues that should be considered [120]. For example, its use is constrained by the potential for hallucinations—confidently presented but factually incorrect outputs. Inaccurate information about treatments, medication dosages, or diagnoses can have life-threatening consequences [12,121].
Additionally, biases in training data may result in diagnostic outputs that reflect historical inequities. For instance, research has shown that AI-generated responses may be less accurate for underrepresented populations, compounding disparities in health outcomes [20]. Privacy is another significant concern. Users interacting with ChatGPT through third-party platforms may unknowingly disclose sensitive health data, which could be stored or processed outside legal safeguards such as HIPAA and GDPR [122,123].
These concerns underscore the need for human oversight, ethical triage protocols, and clear guidelines on the responsible use of LLMs in clinical environments. Without these, patients may experience diminished trust, reduced autonomy, or unintended harm due to AI-mediated interactions.

4.2. Ethical Implications in Education

In educational settings, the ethical stakes center on intellectual integrity, epistemic reliability, and equitable access [19,124,125]. ChatGPT’s ability to generate essays, solve problems, and offer tutoring assistance has challenged traditional assessment models. While this can support personalized learning, it may also facilitate academic dishonesty and short-circuit the cognitive struggle essential to deep learning [104,126].
Moreover, generative outputs are often plausible but incorrect, potentially leading students to absorb misleading information. These “hallucinations” pose a subtle but pervasive risk in environments where learners may lack the domain knowledge to distinguish fact from fabrication [127,128]. Instructors must therefore rethink evaluation practices and promote critical AI literacy to help students engage responsibly with these tools.
Access also remains uneven. Schools with greater digital infrastructure and training resources are better positioned to integrate ChatGPT meaningfully, while under-resourced institutions risk being left behind. This digital divide [129] may exacerbate pre-existing educational inequalities, particularly when generative AI becomes central to learning platforms [18].

4.3. Ethical Implications in the Economy

In economic domains, ChatGPT’s ethical implications intersect with issues of transparency, labor automation, and data governance [116,117]. Businesses increasingly rely on ChatGPT to automate communication, market analysis, and strategic planning [7,61,65]. However, users may misinterpret AI outputs as authoritative, especially when using ChatGPT for financial forecasting or legal documentation—tasks where source traceability and factual precision are critical.
The proprietary nature of LLMs also complicates data accountability. Users may input sensitive business information (e.g., unreleased financial results or client strategies) without realizing that these interactions could be stored, cached, or leaked. Although providers like OpenAI limit data retention, platform-specific implementations and user misunderstandings can still lead to breaches of confidentiality.
Meanwhile, the automation of white-collar tasks—report writing, customer service, coding—raises concerns about job displacement and labor deskilling, especially for early-career professionals. As ChatGPT augments or replaces certain forms of knowledge work, organizations must consider how to retrain displaced workers and ensure inclusive digital transitions [116].

4.4. Cross-Cutting Issues: Privacy, Hallucinations, and Environmental Costs

Several ethical risks transcend individual sectors. Among these, privacy and security must be highlighted [122,130,131]. Whether in a classroom, clinic, or company, users may share sensitive information without fully understanding how their data are stored, used, or retained. Even when companies assert that user inputs are not used for training by default, such practices vary by implementation. A lack of transparency in these interactions can erode user trust and expose organizations to reputational and legal risk [117].
Another persistent issue is hallucination—a well-documented limitation of LLMs where outputs sound convincing but contain factual errors [132,133]. In education, this can mislead students and even researchers [128,134,135,136]; in healthcare, it can be dangerous [137,138,139]; in business, it can be costly [140]. The growing risk of misinformation from authoritative-sounding AI highlights the need for human verification, especially in high-stakes contexts [133].
Finally, the environmental footprint of ChatGPT and similar models is an often-overlooked ethical dimension [141,142]. Running large LLMs consumes substantial amounts of energy and water—particularly for cooling server farms and supporting high-volume inference calls [142,143,144]. These environmental costs are typically hidden from users but carry serious implications for sustainability and resource equity [127,142].
Together, these issues point to the need for comprehensive ethical frameworks tailored to digital applications. While ChatGPT offers novel capabilities, its responsible deployment requires transparency, oversight, and ongoing interdisciplinary collaboration.
Table 3 presents a sector-specific breakdown of key ethical concerns associated with ChatGPT—namely bias, hallucinations, and privacy risks. Each concern is contextualized with representative scenarios in healthcare, education, and economic environments to illustrate real-world consequences of ChatGPT’s digital deployment.
As shown in Table 3, the ethical risks associated with ChatGPT are not uniform but vary significantly across domains. In healthcare, the consequences of biased or hallucinated outputs can be life-threatening, while in education, they may undermine learning integrity and widen achievement gaps. In economic settings, privacy breaches and misinformation can result in financial losses and reputational damage. These distinctions underscore the importance of sector-specific safeguards, including regulatory oversight, contextual awareness in model design, and clear usage policies tailored to each domain. Addressing these concerns proactively is essential to ensuring that ChatGPT’s integration into digital infrastructures enhances rather than erodes public trust.
These ethical challenges not only affect individual users and organizations but also have broader societal ramifications. As ChatGPT becomes embedded in critical digital infrastructures, its influence extends beyond technical performance to shape how people access information, interact with institutions, and experience social inclusion or exclusion. The following section explores these cross-sectoral societal implications, with particular attention to public trust, equity, and the risks of misinformation in an AI-mediated digital landscape.

5. Societal Implications

ChatGPT’s rise is not merely a technological development—it is a cultural and social inflection point [142,145]. As large language models become embedded in digital platforms across healthcare, education, and the economy, they begin to reshape how people access knowledge, interact with institutions, and participate in civic life. This section explores the broader societal implications of ChatGPT’s integration across these three domains, with attention to public trust, equity, and misinformation in the digital age.

5.1. Implications for Digital Health

The incorporation of ChatGPT into digital health infrastructures has the potential to enhance access to care, particularly in underserved or remote settings. ChatGPT-powered systems can provide immediate, language-simplified explanations of symptoms, treatment options, and medical terminology, helping bridge health literacy gaps [146,147]. Patients interacting with online portals, health apps, or digital assistants may benefit from 24/7 support that augments overloaded health systems.
Yet these benefits come with notable risks. Users may mistake ChatGPT’s fluency for expertise, leading to misdiagnoses, delayed treatment, or misplaced trust in automated advice. As chatbot-based triage systems become more common, there is a growing need for AI literacy among patients and clinicians alike. Societal trust in digital health depends not only on technological performance but also on transparent communication, clear disclaimers, and strong regulatory oversight to ensure that the use of ChatGPT complements—not replaces—human judgment.

5.2. Implications for Digital Education

In education, ChatGPT represents both a pedagogical innovation and a potential disruption to the traditional learning process. Students increasingly rely on ChatGPT to draft essays, solve equations, and clarify complex topics [148,149]. For some, this enhances engagement and learning flexibility; for others, it poses a shortcut that undermines critical thinking and original inquiry [21,150].
The societal implications here are twofold. First, there is concern about the erosion of student autonomy. The temptation to outsource cognitive effort may weaken essential skills such as argumentation, synthesis, and academic integrity. Second, there is a growing digital equity gap. While wealthier institutions may offer guided AI use and infrastructure, schools with limited resources risk being left behind—exacerbating existing educational inequalities. As ChatGPT becomes normalized in classrooms, educators must grapple with the ethics of automation, surveillance, and intellectual development [19,124,125].

5.3. Implications for the Digital Economy

In economic life, ChatGPT’s impact is most evident in labor and productivity. By automating tasks such as content generation, translation, customer support, and basic data analysis, ChatGPT is redefining what constitutes skilled labor in the digital economy [61,106]. While this automation can boost efficiency, especially for small businesses and entrepreneurs, it may also result in job displacement and task deskilling, particularly for entry-level knowledge workers.
The implications extend beyond the workplace. As ChatGPT is integrated into financial advising, legal drafting, and strategic forecasting, questions arise about accountability and informational reliability. Errors or hallucinations in AI-generated business content can lead to misinformed decisions, undermining trust in economic systems. Additionally, the competitive advantage conferred by ChatGPT may be unevenly distributed, favoring firms with digital expertise and access to high-quality data while marginalizing those without.

5.4. Cross-Sectoral Issues: Trust, Equity, and Misinformation

Across healthcare, education, and the economy, three cross-cutting societal challenges stand out:
The first challenge is related to trust. ChatGPT’s linguistic sophistication often masks its underlying limitations. This creates what some scholars describe as the “ELIZA effect”—users anthropomorphize the system and assume it “understands” more than it does [151,152]. In domains where precision and accountability matter, this illusion of intelligence may lead to poor decision-making or misplaced confidence in AI-generated content.
Second, there is the issue of equity. The benefits of ChatGPT are not evenly distributed. Communities with limited internet access, outdated hardware, or low digital literacy face barriers to using such tools effectively [143,153,154]. As governments and institutions digitize services using LLMs, the risk of exclusion grows—amplifying structural inequalities along lines of income, geography, and education.
Third, there are challenges related to misinformation. Perhaps the most urgent concern is ChatGPT’s potential to produce and spread convincing but false information [155,156]. From fictitious citations in academic contexts to persuasive political propaganda, the model’s outputs can be used—intentionally or unintentionally—to mislead. When integrated into social media platforms or content farms, ChatGPT may accelerate the erosion of public trust in information systems [155].
Addressing these challenges requires multi-stakeholder collaboration. Regulators must develop digital-specific policies [157,158]. Educators must teach critical AI literacy [101,102]. Developers must prioritize safety, transparency, and inclusivity. And civil society must remain vigilant in holding platforms and institutions accountable for how generative AI reshapes the digital public sphere.
To illustrate the broader consequences of ChatGPT’s integration across digital infrastructures, Table 4 summarizes key societal impacts in healthcare, education, and the economy. The table highlights how issues such as access, equity, misinformation, labor dynamics, and public trust play out differently across domains. These cross-cutting themes demonstrate that ChatGPT’s role is not merely technical—it actively shapes institutional practices, digital participation, and societal norms. Understanding these sector-specific effects is essential for designing inclusive and resilient AI ecosystems.
These societal impacts underscore the need for proactive, interdisciplinary approaches to guide ChatGPT’s development and deployment. As the technology continues to evolve, researchers, practitioners, and policymakers must not only respond to current challenges but also anticipate future ones. The following section outlines key research priorities that address both domain-specific needs and broader questions of governance, equity, and technological advancement in a rapidly transforming digital landscape.

6. Future Research Directions

As ChatGPT becomes increasingly embedded in digital infrastructures, there is an urgent need for sector-specific, evidence-driven research to evaluate its real-world implications. This section outlines key priorities in digital health, education, and the economy, followed by cross-cutting areas that span domains and raise important questions of design, governance, and sustainability.

6.1. Research Priorities in Digital Health

In the domain of digital health, future research should focus on the clinical relevance, safety, and regulatory compatibility of ChatGPT-powered tools. Studies are needed to assess the accuracy of AI-generated summaries and recommendations, especially in diverse patient populations and across different languages, health conditions, and literacy levels [159]. Investigations into how patients experience trust, comprehension, and emotional support when interacting with health chatbots will be essential, particularly in mental health and chronic care contexts. In parallel, technical research should explore privacy-preserving architectures—such as federated learning or on-device processing [160]—that enable the use of LLMs in clinical environments while ensuring compliance with legal frameworks like HIPAA and GDPR. Understanding how ChatGPT can support, rather than displace, the therapeutic relationship remains a key ethical and empirical challenge.

6.2. Research Priorities in Digital Education

In education, researchers should examine how ChatGPT is changing the nature of learning, assessment, and pedagogical design in digitally connected classrooms [21,22]. Longitudinal studies will be important to evaluate the impact of sustained AI access on student outcomes, including knowledge retention, critical thinking, and academic integrity. Further research is needed into how ChatGPT is integrated into learning management systems [161], where it may assist with personalized feedback, formative assessment, and curriculum development. Prompt engineering and instructional scaffolding techniques warrant closer study as tools for helping students learn with AI rather than simply bypassing effort [162]. Equity considerations should remain central: the effects of ChatGPT may differ substantially based on students’ educational backgrounds, digital fluency, and institutional resources.

6.3. Research Priorities in the Digital Economy

Within the economic domain, future research should focus on how ChatGPT is transforming knowledge work, labor dynamics, and organizational decision-making [163]. Studies are needed to quantify productivity impacts in fields such as customer support, marketing, and financial forecasting, and to assess the reliability and transparency of AI-generated business content. Researchers should also examine how small and medium-sized enterprises adopt ChatGPT, including the barriers they face, the benefits they derive, and the ethical concerns that arise [164,165]. The broader impact of automation on employment and professional identity is another pressing issue [29,166]: as ChatGPT alters the expectations and skill requirements of various roles, more empirical work is required to understand patterns of displacement, upskilling, and workforce stratification.

6.4. Cross-Cutting Research Needs

Beyond sector-specific concerns, several cross-cutting research areas are emerging as critical for responsible AI development. First, bias mitigation remains a central challenge [5,13]. Researchers must investigate how biases are encoded through training data, interface design, and reinforcement processes—and develop interventions to minimize harm, especially in sensitive domains. Second, explainability and transparency deserve focused attention [167], particularly in applications where users must be able to interrogate the reasoning behind AI outputs. Third, there is a growing need for research on governance and regulation, including studies on how institutions and governments can design safeguards, accountability mechanisms, and disclosure standards for ChatGPT in public-facing digital services [168]. Finally, the environmental implications of generative AI warrant urgent investigation [142,169]. The training and deployment of large models consume significant energy and water resources, raising questions about their sustainability, carbon footprint, and potential trade-offs with other social goals [170].
Meeting these research needs will require methodological diversity and interdisciplinary collaboration. Combining user-centered design, computational auditing, ethnographic fieldwork, and experimental evaluation will help ensure that insights are both robust and grounded in real-world contexts. Collaboration across fields—linking education, healthcare, law, computer science, economics, and ethics—will be essential to guide ChatGPT’s evolution in ways that are inclusive, transparent, and aligned with public values.
To support responsible AI development, future research must address both technical limitations and broader societal demands. As shown in Table 5, priority areas vary across sectors but converge around themes such as trust, bias mitigation, privacy, and sustainability. Healthcare requires validation for clinical safety and diverse user needs; education calls for studies on learning outcomes and equitable access; and the economy demands scrutiny of AI-generated decisions and data governance. Cross-cutting concerns, such as multimodal capabilities and environmental impact, highlight the need for interdisciplinary collaboration to guide ChatGPT’s evolution in complex digital environments.
Addressing these research priorities will be essential as ChatGPT becomes further embedded in the digital infrastructures of daily life. However, research alone is not sufficient. Realizing the benefits of generative AI while minimizing its risks will require coordinated efforts across sectors—uniting technological innovation with thoughtful policy, ethical reflection, and inclusive design. The concluding section reflects on these broader implications and outlines how this review contributes to ongoing debates about AI, society, and digital transformation.

7. Conclusions

7.1. Summary and Contributions

This review has examined ChatGPT’s evolution and its impact across three critical domains—digital health, education, and the economy—where the technology is actively reshaping workflows, roles, and expectations. In healthcare, ChatGPT is being used to simplify medical communication, assist with documentation, and offer mental health support. In education, it is transforming how students engage with content and how educators design assessment and feedback. In the economy, the model is increasingly integrated into business operations, customer service, and knowledge work. These sectoral shifts reflect ChatGPT’s broader role in accelerating digital transformation.
By focusing on these domains, this review makes several contributions. First, it offers a synthesized and updated overview of the current literature, helping to map out the uses, risks, and limitations of ChatGPT as it becomes embedded in key digital infrastructures. Second, it emphasizes the importance of domain-specific analysis rather than generalizing across all possible use cases—providing nuance that is often missing in broader AI commentary. Third, the review brings ethical, social, and governance considerations into focus, demonstrating that the implications of generative AI are not just technical but deeply societal [171]. Finally, by foregrounding the digital context, this work aligns with the growing need to understand AI not in isolation, but as a driver of transformation within platformized, data-intensive environments [28].

7.2. Limitations and Future Research Directions

This review is narrative in nature [10,11], and while it captures a wide range of recent literature and use cases, it does not follow a systematic review protocol. As such, the selection of sources may reflect some degree of selection bias, and the analysis remains necessarily interpretive. Moreover, given the pace of development in generative AI, some findings may become outdated quickly as new model iterations and applications emerge. The focus on ChatGPT also means that other important models—such as Gemini, Claude, or LLaMA—are not discussed in detail, though many of the insights presented here may extend to those systems as well.
Future research should address these limitations by employing more systematic methods, such as meta-analyses or empirical user studies, to test and validate the claims that currently rest on emerging or exploratory evidence. There is also a need for longitudinal research that tracks the evolving use and societal effects of ChatGPT over time. Beyond method, future work should continue to explore the intersection of generative AI with regulatory design, institutional governance, and platform-specific integration [172]. Researchers should also pay closer attention to equity and sustainability, ensuring that access to generative AI technologies does not reproduce or deepen existing social divides, and that the environmental costs of scaling AI are better understood and managed [173].
In sum, while ChatGPT holds significant potential to enhance access, productivity, and creativity across sectors, its responsible use will depend on the coordinated efforts of developers, educators, healthcare professionals, policymakers, and researchers. By identifying emerging patterns, ethical tensions, and open questions, this review contributes to a growing body of critical scholarship aimed at understanding and guiding the future of generative AI.

Author Contributions

Conceptualization, D.Ø.M. and D.M.T.II; validation, D.Ø.M. and D.M.T.II; investigation, D.Ø.M. and D.M.T.II; resources, D.Ø.M. and D.M.T.II; writing—original draft preparation, D.Ø.M.; writing—review and editing, D.Ø.M. and D.M.T.II; visualization, D.Ø.M.; project administration, D.Ø.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

During the preparation of this work, the authors used ChatGPT Plus and Grammarly Premium to improve language, structure, and readability. After using these tools, the authors reviewed and edited the content as needed. The authors take full responsibility for the content of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhao, W.X.; Zhou, K.; Li, J.; Tang, T.; Wang, X.; Hou, Y.; Min, Y.; Zhang, B.; Zhang, J.; Dong, Z. A survey of large language models. arXiv 2023, arXiv:2303.18223. [Google Scholar]
  2. Fan, L.; Li, L.; Ma, Z.; Lee, S.; Yu, H.; Hemphill, L. A bibliometric review of large language models research from 2017 to 2023. ACM Trans. Intell. Syst. Technol. 2024, 15, 1–25. [Google Scholar] [CrossRef]
  3. Ciesla, R. The Book of Chatbots: From ELIZA to ChatGPT; Springer Nature: Berlin/Heidelberg, Germany, 2024. [Google Scholar]
  4. Taecharungroj, V. What Can ChatGPT Do? Analyzing Early Reactions to the Innovative AI Chatbot on Twitter. Big Data Cogn. Comput. 2023, 7, 35. [Google Scholar] [CrossRef]
  5. Ray, P.P. ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet Things Cyber-Phys. Syst. 2023, 3, 121–154. [Google Scholar] [CrossRef]
  6. Gill, S.S.; Kaur, R. ChatGPT: Vision and challenges. Internet Things Cyber-Phys. Syst. 2023, 3, 262–271. [Google Scholar] [CrossRef]
  7. Gupta, B.; Mufti, T.; Sohail, S.S.; Madsen, D.Ø. ChatGPT: A brief narrative review. Cogent Bus. Manag. 2023, 10, 2275851. [Google Scholar] [CrossRef]
  8. Marr, B. A short history of ChatGPT: How we got to where we are today. Forbes 2023, 5, 19. [Google Scholar]
  9. Nazir, A.; Wang, Z. A comprehensive survey of ChatGPT: Advancements, applications, prospects, and challenges. Meta-Radiol. 2023, 1, 100022. [Google Scholar] [CrossRef]
  10. Baumeister, R.F.; Leary, M.R. Writing narrative literature reviews. Rev. Gen. Psychol. 1997, 1, 311. [Google Scholar] [CrossRef]
  11. Ferrari, R. Writing narrative style literature reviews. Med. Writ. 2015, 24, 230–235. [Google Scholar] [CrossRef]
  12. Sallam, M. ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare 2023, 11, 887. [Google Scholar] [CrossRef] [PubMed]
  13. Sohail, S.S.; Farhat, F.; Himeur, Y.; Nadeem, M.; Madsen, D.Ø.; Singh, Y.; Atalla, S.; Mansoor, W. Decoding ChatGPT: A Taxonomy of Existing Research, Current Challenges, and Possible Future Directions. J. King Saud Univ.—Comput. Inf. Sci. 2023, 35, 101675. [Google Scholar] [CrossRef]
  14. Farhat, F.; Silva, E.S.; Hassani, H.; Madsen, D.Ø.; Sohail, S.S.; Himeur, Y.; Alam, M.A.; Zafar, A. The scholarly footprint of ChatGPT: A bibliometric analysis of the early outbreak phase. Front. Artif. Intell. 2024, 6, 1270749. [Google Scholar] [CrossRef] [PubMed]
  15. Yenduri, G.; Ramalingam, M.; Selvi, G.C.; Supriya, Y.; Srivastava, G.; Maddikunta, P.K.R.; Raj, G.D.; Jhaveri, R.H.; Prabadevi, B.; Wang, W.; et al. GPT (Generative Pre-Trained Transformer)—A Comprehensive Review on Enabling Technologies, Potential Applications, Emerging Challenges, and Future Directions. IEEE Access 2024, 12, 54608–54649. [Google Scholar] [CrossRef]
  16. Polat, H.; Topuz, A.C.; Yıldız, M.; Taşlıbeyaz, E.; Kurşun, E. A Bibliometric Analysis of Research on ChatGPT in Education. Int. J. Technol. Educ. 2024, 7, 59–85. [Google Scholar] [CrossRef]
  17. Koo, M. ChatGPT Research: A Bibliometric Analysis Based on the Web of Science from 2023 to June 2024. Knowledge 2025, 5, 4. [Google Scholar] [CrossRef]
  18. Farrokhnia, M.; Banihashem, S.K.; Noroozi, O.; Wals, A. A SWOT analysis of ChatGPT: Implications for educational practice and research. Innov. Educ. Teach. Int. 2023, 61, 460–474. [Google Scholar] [CrossRef]
  19. Mienye, I.D.; Swart, T.G. ChatGPT in Education: A Review of Ethical Challenges and Approaches to Enhancing Transparency and Privacy. Procedia Comput. Sci. 2025, 254, 181–190. [Google Scholar] [CrossRef]
  20. Li, J.; Dada, A.; Puladi, B.; Kleesiek, J.; Egger, J. ChatGPT in healthcare: A taxonomy and systematic review. Comput. Methods Programs Biomed. 2024, 245, 108013. [Google Scholar] [CrossRef]
  21. Lo, C.K. What Is the Impact of ChatGPT on Education? A Rapid Review of the Literature. Educ. Sci. 2023, 13, 410. [Google Scholar] [CrossRef]
  22. Albadarin, Y.; Saqr, M.; Pope, N.; Tukiainen, M. A systematic literature review of empirical research on ChatGPT in education. Discov. Educ. 2024, 3, 60. [Google Scholar] [CrossRef]
  23. Oliński, M.; Krukowski, K.; Sieciński, K. Bibliometric Overview of ChatGPT: New Perspectives in Social Sciences. Publications 2024, 12, 9. [Google Scholar] [CrossRef]
  24. Gande, S.; Gould, M.; Ganti, L. Bibliometric analysis of ChatGPT in medicine. Int. J. Emerg. Med. 2024, 17, 50. [Google Scholar] [CrossRef]
  25. Cong-Lem, N.; Soyoof, A.; Tsering, D. A systematic review of the limitations and associated opportunities of ChatGPT. Int. J. Hum.–Comput. Interact. 2025, 41, 3851–3866. [Google Scholar] [CrossRef]
  26. Ali, D.; Fatemi, Y.; Boskabadi, E.; Nikfar, M.; Ugwuoke, J.; Ali, H. ChatGPT in teaching and learning: A systematic review. Educ. Sci. 2024, 14, 643. [Google Scholar] [CrossRef]
  27. Nan, D.; Zhao, X.; Chen, C.; Sun, S.; Lee, K.R.; Kim, J.H. Bibliometric Analysis on ChatGPT Research with CiteSpace. Information 2025, 16, 38. [Google Scholar] [CrossRef]
  28. Van Dijck, J.; Poell, T.; De Waal, M. The Platform Society: Public Values in a Connective World; Oxford University Press: Oxford, UK, 2018. [Google Scholar]
  29. Brynjolfsson, E.; McAfee, A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies; WW Norton & Company: New York, NY, USA, 2014. [Google Scholar]
  30. Sadowski, J. When data is capital: Datafication, accumulation, and extraction. Big Data Soc. 2019, 6, 2053951718820549. [Google Scholar] [CrossRef]
  31. Ruckenstein, M.; Schüll, N.D. The Datafication of Health. Annu. Rev. Anthropol. 2017, 46, 261–278. [Google Scholar] [CrossRef]
  32. Wajcman, J. Automation: Is it really different this time? Br. J. Sociol. 2017, 68, 119–127. [Google Scholar] [CrossRef]
  33. Humlum, A.; Vestergaard, E. The Adoption of ChatGPT; University of Chicago, Becker Friedman Institute for Economics Working Paper; Institute of Labor Economics (IZA): Bonn, Germany, 2024. [Google Scholar]
  34. McGeorge, D. The ChatGPT Revolution: How to Simplify Your Work and Life Admin with AI.; John Wiley & Sons: Hoboken, NJ, USA, 2023. [Google Scholar]
  35. Siddiqui, Z.H.; Azeez, M.A.; Sohail, S.S.; Ahmed, J.; Madsen, D.Ø. A preliminary exploration of ChatGPT’s potential in medical reasoning and patient care. Crit. Public Health 2025, 35. [Google Scholar] [CrossRef]
  36. Wolf, V.; Maier, C. ChatGPT usage in everyday life: A motivation-theoretic mixed-methods study. Int. J. Inf. Manag. 2024, 79, 102821. [Google Scholar] [CrossRef]
  37. Sohail, S.S. A Promising Start and Not a Panacea: ChatGPT’s Early Impact and Potential in Medical Science and Biomedical Engineering Research. Ann. Biomed. Eng. 2023, 52, 1131–1135. [Google Scholar] [CrossRef]
  38. Watson, S.; Romic, J. ChatGPT and the entangled evolution of society, education, and technology: A systems theory perspective. Eur. Educ. Res. J. 2025, 24, 205–224. [Google Scholar] [CrossRef]
  39. Sumbal, M.S.; Amber, Q. ChatGPT: A game changer for knowledge management in organizations. Kybernetes 2025, 54, 3217–3237. [Google Scholar] [CrossRef]
  40. Wood, D.A. Rewiring Your Mind for AI: How to Think, Work, and Thrive in the Age of Intelligence; Technics Publications, LLC: Sedona, AZ, USA, 2025. [Google Scholar]
  41. Raj, R.; Singh, A.; Kumar, V.; Verma, P. Analyzing the potential benefits and use cases of ChatGPT as a tool for improving the efficiency and effectiveness of business operations. BenchCouncil Trans. Benchmarks Stand. Eval. 2023, 3, 100140. [Google Scholar] [CrossRef]
  42. George, A.S.; George, A.H. A Review of ChatGPT AI’s Impact on Several Business Sectors. Partn. Univers. Int. Innov. J. 2023, 1, 9–23. [Google Scholar]
  43. Singh, H.; Singh, A. ChatGPT: Systematic Review, Applications, and Agenda for Multidisciplinary Research. J. Chin. Econ. Bus. Stud. 2023, 21, 193–212. [Google Scholar] [CrossRef]
  44. Delellis, N.S.; Chen, Y.; Cornwell, S.E.; Kelly, D.; Mayhew, A.; Onaolapo, S.; Rubin, V.L. ChatGPT Media Coverage Metrics: Initial Examination. Proc. Assoc. Inf. Sci. Technol. 2023, 60, 935–937. [Google Scholar] [CrossRef]
  45. Karanouh, M. Mapping ChatGPT in Mainstream Media: Early Quantitative Insights Through Sentiment Analysis and Word Frequency Analysis. arXiv 2023, arXiv:2305.18340. [Google Scholar]
  46. Leiter, C.; Zhang, R.; Chen, Y.; Belouadi, J.; Larionov, D.; Fresen, V.; Eger, S. ChatGPT: A Meta-Analysis After 2.5 Months. arXiv 2023, arXiv:2302.13795. [Google Scholar] [CrossRef]
  47. Silva, E.; Madsen, D.Ø. Google Trends. In Encyclopedia of Tourism Management and Marketing; Buhalis, D., Ed.; Edward Elgar: Northampton, MA, USA, 2022. [Google Scholar]
  48. Jun, S.-P.; Yoo, H.S.; Choi, S. Ten Years of Research Change Using Google Trends: From the Perspective of Big Data Utilizations and Applications. Technol. Forecast. Soc. Change 2018, 130, 69–87. [Google Scholar] [CrossRef]
  49. David, E. OpenAI Finally Brings Humanlike ChatGPT Advanced Voice Mode to U.S. Plus, Team Users; VentureBeat: San Francisco, CA, USA, 2024. [Google Scholar]
  50. Zheng, Y.; Wang, L.; Feng, B.; Zhao, A.; Wu, Y. Innovating Healthcare: The Role of ChatGPT in Streamlining Hospital Workflow in the Future. Ann. Biomed. Eng. 2024, 52, 750–753. [Google Scholar] [CrossRef] [PubMed]
  51. Javaid, M.; Haleem, A.; Singh, R.P. ChatGPT for healthcare services: An emerging stage for an innovative perspective. BenchCouncil Trans. Benchmarks Stand. Eval. 2023, 3, 100105. [Google Scholar] [CrossRef]
  52. Youssef, E.R.; Meriem, M.; Ait-Lemqeddem, H. ChatGPT and ethics in healthcare facilities: An overview and innovations in technical efficiency analysis. AI Ethics 2025. [Google Scholar] [CrossRef]
  53. Firat, M. What ChatGPT means for universities: Perceptions of scholars and students. J. Appl. Learn. Teach. 2023, 6, 57–63. [Google Scholar]
  54. Fesenmaier, D.R.; Wöber, K. AI, ChatGPT and the university. Ann. Tour. Res. 2023, 101, 103578. [Google Scholar] [CrossRef]
  55. Kiryakova, G.; Angelova, N. ChatGPT—A challenging tool for the university professors in their teaching practice. Educ. Sci. 2023, 13, 1056. [Google Scholar] [CrossRef]
  56. Rasul, T.; Nair, S.; Kalendra, D.; Robin, M.; de Oliveira Santini, F.; Ladeira, W.J.; Sun, M.; Day, I.; Rather, R.A.; Heathcote, L. The role of ChatGPT in higher education: Benefits, challenges, and future research directions. J. Appl. Learn. Teach. 2023, 6, 41–56. [Google Scholar]
  57. Wang, H.; Dang, A.; Wu, Z.; Mac, S. Generative AI in higher education: Seeing ChatGPT through universities’ policies, resources, and guidelines. Comput. Educ. Artif. Intell. 2024, 7, 100326. [Google Scholar] [CrossRef]
  58. Sliż, P. The Role of ChatGPT in Elevating Customer Experience and Efficiency in Automotive After-Sales Business Processes. Appl. Syst. Innov. 2024, 7, 29. [Google Scholar] [CrossRef]
  59. Carvalho, I.; Ivanov, S. ChatGPT for tourism: Applications, benefits and risks. Tour. Rev. 2023, 79, 290–303. [Google Scholar] [CrossRef]
  60. Hassani, H.; Silva, E.S. Large language models as benchmarks in forecasting practice. Foresight Int. J. Appl. Forecast. 2024, 75, 5–10. [Google Scholar]
  61. Dwivedi, Y.K.; Kshetri, N.; Hughes, L.; Slade, E.L.; Jeyaraj, A.; Kar, A.K.; Baabdullah, A.M.; Koohang, A.; Raghavan, V.; Ahuja, M. “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 2023, 71, 102642. [Google Scholar] [CrossRef]
  62. Rudolph, J.; Tan, S.; Tan, S. War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and its impact on higher education. J. Appl. Learn. Teach. 2023, 6, 364–389. [Google Scholar]
  63. Ivanov, S.; Soliman, M. Game of algorithms: ChatGPT implications for the future of tourism education and research. J. Tour. Futures 2023, 9, 214–221. [Google Scholar] [CrossRef]
  64. Altinay, Z.; Altinay, F.; Tlili, A.; Vatankhah, S. “Keep your friends close, but your enemies closer”: ChatGPT in tourism and hospitality. J. Hosp. Tour. Technol. 2024, 16, 213–228. [Google Scholar] [CrossRef]
  65. Dwivedi, Y.K.; Pandey, N.; Currie, W.; Micu, A. Leveraging ChatGPT and other generative artificial intelligence (AI)-based applications in the hospitality and tourism industry: Practices, challenges and research agenda. Int. J. Contemp. Hosp. Manag. 2023, 36, 1–12. [Google Scholar] [CrossRef]
  66. Vaillant, T.S.; de Almeida, F.D.; Neto, P.A.; Gao, C.; Bosch, J.; de Almeida, E.S. Developers’ Perceptions on the Impact of ChatGPT in Software Development: A Survey. arXiv 2024, arXiv:2405.12195. [Google Scholar]
  67. Rajbhoj, A.; Somase, A.; Kulkarni, P.; Kulkarni, V. Accelerating Software Development Using Generative AI: ChatGPT Case Study. In Proceedings of the 17th Innovations in Software Engineering Conference, Bangalore, India, 22–24 February 2024; pp. 1–11. [Google Scholar]
  68. Tian, H.; Lu, W.; Li, T.O.; Tang, X.; Cheung, S.-C.; Klein, J.; Bissyandé, T.F. Is ChatGPT the ultimate programming assistant—How far is it? arXiv 2023, arXiv:2304.11938. [Google Scholar]
  69. Chu, H.; Liu, S. Can AI tell good stories? Narrative transportation and persuasion with ChatGPT. J. Commun. 2024, 74, 347–358. [Google Scholar] [CrossRef]
  70. Huang, J.; Huang, K. ChatGPT in Gaming Industry. In Beyond AI: ChatGPT, Web3, and the Business Landscape of Tomorrow; Huang, K., Wang, Y., Zhu, F., Chen, X., Xing, C., Eds.; Springer Nature: Cham, Switzerland, 2023; pp. 243–269. [Google Scholar] [CrossRef]
  71. Tyni, J.; Turunen, A.; Kahila, J.; Bednarik, R.; Tedre, M. Can ChatGPT Match the Experts? A Feedback Comparison for Serious Game Development. Int. J. Serious Games 2024, 11, 87–106. [Google Scholar] [CrossRef]
  72. Zhang, P.; Kamel Boulos, M.N. Generative AI in medicine and healthcare: Promises, opportunities and challenges. Future Internet 2023, 15, 286. [Google Scholar] [CrossRef]
  73. Hopkins, A.M.; Logan, J.M.; Kichenadasse, G.; Sorich, M.J. Artificial intelligence chatbots will revolutionize how cancer patients access information: ChatGPT represents a paradigm-shift. JNCI Cancer Spectr. 2023, 7, pkad010. [Google Scholar] [CrossRef] [PubMed]
  74. Dergaa, I.; Fekih-Romdhane, F.; Hallit, S.; Loch, A.A.; Glenn, J.M.; Fessi, M.S.; Ben Aissa, M.; Souissi, N.; Guelmami, N.; Swed, S. ChatGPT is not ready yet for use in providing mental health assessment and interventions. Front. Psychiatry 2024, 14, 1277756. [Google Scholar] [CrossRef] [PubMed]
  75. Farhat, F. ChatGPT as a Complementary Mental Health Resource: A Boon or a Bane. Ann. Biomed. Eng. 2023, 52, 1111–1114. [Google Scholar] [CrossRef]
  76. Kalam, K.T.; Rahman, J.M.; Islam, M.R.; Dewan, S.M.R. ChatGPT and mental health: Friends or foes? Health Sci. Rep. 2024, 7, e1912. [Google Scholar] [CrossRef]
  77. Pandya, A.; Lodha, P.; Ganatra, A. Is ChatGPT ready to change mental healthcare? Challenges and considerations: A reality-check. Front. Hum. Dyn. 2024, 5, 1289255. [Google Scholar] [CrossRef]
  78. Raile, P. The usefulness of ChatGPT for psychotherapists and patients. Humanit. Soc. Sci. Commun. 2024, 11, 47. [Google Scholar] [CrossRef]
  79. Adeshola, I.; Adepoju, A.P. The opportunities and challenges of ChatGPT in education. Interact. Learn. Environ. 2023, 32, 6159–6172. [Google Scholar] [CrossRef]
  80. Jiang, Y.; Xie, L.; Lin, G.; Mo, F. Widen the debate: What is the academic community’s perception on ChatGPT? Educ. Inf. Technol. 2024, 29, 20181–20200. [Google Scholar] [CrossRef]
  81. Li, L.; Ma, Z.; Fan, L.; Lee, S.; Yu, H.; Hemphill, L. ChatGPT in education: A discourse analysis of worries and concerns on social media. Educ. Inf. Technol. 2024, 29, 10729–10762. [Google Scholar] [CrossRef]
  82. Gill, S.S.; Xu, M.; Patros, P.; Wu, H.; Kaur, R.; Kaur, K.; Fuller, S.; Singh, M.; Arora, P.; Parlikad, A.K.; et al. Transformative effects of ChatGPT on modern education: Emerging Era of AI Chatbots. Internet Things Cyber-Phys. Syst. 2023, 4, 19–23. [Google Scholar] [CrossRef]
  83. van den Berg, G.; du Plessis, E. ChatGPT and generative AI: Possibilities for its contribution to lesson planning, critical thinking and openness in teacher education. Educ. Sci. 2023, 13, 998. [Google Scholar] [CrossRef]
  84. Powell, W.; Courchesne, S. Opportunities and risks involved in using ChatGPT to create first grade science lesson plans. PLoS ONE 2024, 19, e0305337. [Google Scholar] [CrossRef] [PubMed]
  85. Memarian, B.; Doleck, T. ChatGPT in education: Methods, potentials and limitations. Comput. Hum. Behav. Artif. Hum. 2023, 1, 100022. [Google Scholar] [CrossRef]
  86. Bui, N.M.; Barrot, J.S. ChatGPT as an automated essay scoring tool in the writing classrooms: How it compares with human scoring. Educ. Inf. Technol. 2024, 30, 2041–2058. [Google Scholar] [CrossRef]
  87. Martha, A.S.D.; Widowati, S.; Rahayu, D.P.; Nursyawal, M.I.; Hariz, R.R. Examining Usability and Student Experience of ChatGPT as a Learning Tool on Moodle in Higher Education. In Proceedings of the 2024 International Conference on Information Technology Systems and Innovation (ICITSI), Bandung, Indonesia, 12 December 2024; pp. 8–13. [Google Scholar]
  88. Alotaibi, N.S. The Impact of AI and LMS Integration on the Future of Higher Education: Opportunities, Challenges, and Strategies for Transformation. Sustainability 2024, 16, 10357. [Google Scholar] [CrossRef]
  89. Su, J.; Yang, W. Powerful or mediocre? Kindergarten teachers’ perspectives on using ChatGPT in early childhood education. Interact. Learn. Environ. 2023, 32, 6496–6508. [Google Scholar] [CrossRef]
  90. Chauncey, S.A.; McKenna, H.P. A framework and exemplars for ethical and responsible use of AI Chatbot technology to support teaching and learning. Comput. Educ. Artif. Intell. 2023, 5, 100182. [Google Scholar] [CrossRef]
  91. Dempere, J.; Modugu, K.; Hesham, A.; Ramasamy, L.K. The impact of ChatGPT on higher education. Front. Educ. 2023, 8, 1206936. [Google Scholar] [CrossRef]
  92. Jauhiainen, J.S.; Guerra, A.G. Generative AI and ChatGPT in school children’s education: Evidence from a school lesson. Sustainability 2023, 15, 14025. [Google Scholar] [CrossRef]
  93. Sok, S.; Heng, K. Opportunities, challenges, and strategies for using ChatGPT in higher education: A literature review. J. Digit. Educ. Technol. 2024, 4, ep2401. [Google Scholar] [CrossRef]
  94. Uğraş, H.; Uğraş, M.; Papadakis, S.; Kalogiannakis, M. ChatGPT-Supported Education in Primary Schools: The Potential of ChatGPT for Sustainable Practices. Sustainability 2024, 16, 9855. [Google Scholar] [CrossRef]
  95. Rakap, S. Chatting with GPT: Enhancing individualized education program goal development for novice special education teachers. J. Spec. Educ. Technol. 2024, 39, 339–348. [Google Scholar] [CrossRef]
  96. AlSagri, H.S.; Farhat, F.; Sohail, S.S.; Saudagar, A.K.J. ChatGPT or Gemini: Who Makes the Better Scientific Writing Assistant? J. Acad. Ethics 2024, 1–15. [Google Scholar] [CrossRef]
  97. Garg, S.; Ahmad, A.; Madsen, D.Ø. Academic writing in the age of AI: Comparing the reliability of ChatGPT and Bard with Scopus and Web of Science. J. Innov. Knowl. 2024, 9, 100563. [Google Scholar] [CrossRef]
  98. Cheng, A.; Calhoun, A.; Reedy, G. Artificial intelligence-assisted academic writing: Recommendations for ethical use. Adv. Simul. 2025, 10, 22. [Google Scholar] [CrossRef]
  99. de Fine Licht, K. Generative artificial intelligence in higher education: Why the ‘banning approach’ to student use is sometimes morally justified. Philos. Technol. 2024, 37, 113. [Google Scholar] [CrossRef]
  100. Stone, B.W. Generative AI in Higher Education: Uncertain Students, Ambiguous Use Cases, and Mercenary Perspectives. Teach. Psychol. 2024, 52, 347–356. [Google Scholar] [CrossRef]
  101. Francis, N.J.; Jones, S.; Smith, D.P. Generative AI in Higher Education: Balancing Innovation and Integrity. Br. J. Biomed. Sci. 2025, 81, 14048. [Google Scholar] [CrossRef]
  102. Tzirides, A.O.O.; Zapata, G.; Kastania, N.P.; Saini, A.K.; Castro, V.; Ismael, S.A.; You, Y.-l.; dos Santos, T.A.; Searsmith, D.; O’Brien, C. Combining human and artificial intelligence for enhanced AI literacy in higher education. Comput. Educ. Open 2024, 6, 100184. [Google Scholar] [CrossRef]
  103. Eng, J.; Umphlett, H.; Gilchrist, J.; Howell, A.; Howell, M.A.; Miller-Edwards, W.; Pope, L. AI Beliefs and Practices in Community College Classrooms. Inq. J. Va. Community Coll. 2025, 28, 8. [Google Scholar]
  104. Eke, D.O. ChatGPT and the rise of generative AI: Threat to academic integrity? J. Responsible Technol. 2023, 13, 100060. [Google Scholar] [CrossRef]
  105. Abeysekera, I. ChatGPT and academia on accounting assessments. J. Open Innov. Technol. Mark. Complex. 2024, 10, 100213. [Google Scholar] [CrossRef]
  106. Al Naqbi, H.; Bahroun, Z.; Ahmed, V. Enhancing work productivity through generative artificial intelligence: A comprehensive literature review. Sustainability 2024, 16, 1166. [Google Scholar] [CrossRef]
  107. Noy, S.; Zhang, W. Experimental evidence on the productivity effects of generative artificial intelligence. Science 2023, 381, 187–192. [Google Scholar] [CrossRef] [PubMed]
  108. Ayinde, L.; Wibowo, M.P.; Ravuri, B.; Emdad, F.B. ChatGPT as an important tool in organizational management: A review of the literature. Bus. Inf. Rev. 2023, 40, 137–149. [Google Scholar] [CrossRef]
  109. Arman, M.; Lamiya, U.R. ChatGPT, a Product of AI, and its Influences in the Business World. Talaa J. Islam. Financ. 2023, 3, 18–37. [Google Scholar] [CrossRef]
  110. Jassem, S.; Al Balushi, W. ChatGPT and Implications for the Banking and Financial Industry: New Horizons of Opportunities and Potential Perils. In The ChatGPT Revolution; Behl, A., Krishnan, C., Malik, P., Gautam, S., Eds.; Emerald Publishing Limited: Leeds, UK, 2025; pp. 183–202. [Google Scholar] [CrossRef]
  111. Sigala, M.; Ooi, K.-B.; Tan, G.W.-H.; Aw, E.C.-X.; Cham, T.-H.; Dwivedi, Y.K.; Kunz, W.H.; Letheren, K.; Mishra, A.; Russell-Bennett, R. ChatGPT and service: Opportunities, challenges, and research directions. J. Serv. Theory Pract. 2024, 34, 726–737. [Google Scholar] [CrossRef]
  112. Lopez-Lira, A. The Predictive Edge: Outsmart the Market Using Generative AI and ChatGPT in Financial Forecasting; John Wiley & Sons: Hoboken, NJ, USA, 2024. [Google Scholar]
  113. Hassani, H.; Silva, E.S. The Role of ChatGPT in Data Science: How AI-Assisted Conversational Interfaces Are Revolutionizing the Field. Big Data Cogn. Comput. 2023, 7, 62. [Google Scholar] [CrossRef]
  114. Hassani, H.; Silva, E.S. Predictions from Generative Artificial Intelligence Models: Towards a New Benchmark in Forecasting Practice. Information 2024, 15, 291. [Google Scholar] [CrossRef]
  115. Smith, G.K. Strategic Integration of Generative AI: Opportunities, Challenges, and Organizational Impacts. Law Econ. Soc. 2025, 1, p156. [Google Scholar] [CrossRef]
  116. Zarifhonarvar, A. Economics of chatgpt: A labor market view on the occupational impact of artificial intelligence. J. Electron. Bus. Digit. Econ. 2024, 3, 100–116. [Google Scholar] [CrossRef]
  117. Stahl, B.C.; Eke, D. The ethics of ChatGPT–Exploring the ethical issues of an emerging technology. Int. J. Inf. Manag. 2024, 74, 102700. [Google Scholar] [CrossRef]
  118. Zhou, J.; Müller, H.; Holzinger, A.; Chen, F. Ethical ChatGPT: Concerns, challenges, and commandments. Electronics 2024, 13, 3417. [Google Scholar] [CrossRef]
  119. Wu, X.; Duan, R.; Ni, J. Unveiling security, privacy, and ethical concerns of ChatGPT. J. Inf. Intell. 2024, 2, 102–115. [Google Scholar] [CrossRef]
  120. Wang, C.; Liu, S.; Yang, H.; Guo, J.; Wu, Y.; Liu, J. Ethical considerations of using ChatGPT in health care. J. Med. Internet Res. 2023, 25, e48009. [Google Scholar] [CrossRef]
  121. Cascella, M.; Montomoli, J.; Bellini, V.; Bignami, E. Evaluating the Feasibility of ChatGPT in Healthcare: An Analysis of Multiple Clinical and Research Scenarios. J. Med. Syst. 2023, 47, 33. [Google Scholar] [CrossRef] [PubMed]
  122. Huang, K.; Zhang, F.; Li, Y.; Wright, S.; Kidambi, V.; Manral, V. Security and Privacy Concerns in ChatGPT. In Beyond AI: ChatGPT, Web3, and the Business Landscape of Tomorrow; Huang, K., Wang, Y., Zhu, F., Chen, X., Xing, C., Eds.; Springer Nature: Cham, Switzerland, 2023; pp. 297–328. [Google Scholar] [CrossRef]
  123. Alder, S. Is ChatGPT HIPAA Compliant? The HIPAA Journal: Lansing, MI, USA, 2025. [Google Scholar]
  124. Adel, A.; Ahsan, A.; Davison, C. ChatGPT promises and challenges in education: Computational and ethical perspectives. Educ. Sci. 2024, 14, 814. [Google Scholar] [CrossRef]
  125. Ly, R.; Ly, B. Ethical challenges and opportunities in ChatGPT integration for education: Insights from emerging economy. AI Ethics 2025, 1–18. [Google Scholar] [CrossRef]
  126. Loos, E.; Radicke, J. Using ChatGPT-3 as a writing tool: An educational assistant or a moral hazard? Current ChatGPT-3 media representations compared to Plato’s critical stance on writing in Phaedrus. AI Ethics 2024, 5, 1133–1146. [Google Scholar] [CrossRef]
  127. Loos, E.; Gröpler, J.; Goudeau, M.-L.S. Using ChatGPT in education: Human reflection on ChatGPT’s self-reflection. Societies 2023, 13, 196. [Google Scholar] [CrossRef]
  128. Puyt, R.W.; Madsen, D.Ø. Evaluating ChatGPT-4’s historical accuracy: A case study on the origins of SWOT analysis. Front. Artif. Intell. 2024, 7, 1402047. [Google Scholar] [CrossRef] [PubMed]
  129. Lythreatis, S.; Singh, S.K.; El-Kassar, A.-N. The digital divide: A review and future research agenda. Technol. Forecast. Soc. Change 2022, 175, 121359. [Google Scholar] [CrossRef]
  130. Leboukh, F.; Aduku, E.B.; Ali, O. Balancing ChatGPT and data protection in Germany: Challenges and opportunities for policy makers. J. Politics Ethics New Technol. AI 2023, 2, e35166. [Google Scholar] [CrossRef]
  131. Pleshakova, E.; Osipov, A.; Gataullin, S.; Gataullin, T.; Vasilakos, A. Next gen cybersecurity paradigm towards artificial general intelligence: Russian market challenges and future global technological trends. J. Comput. Virol. Hacking Tech. 2024, 20, 429–440. [Google Scholar] [CrossRef]
  132. Alkaissi, H.; McFarlane, S.I. Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus 2023, 15, e35179. [Google Scholar] [CrossRef]
  133. Azamfirei, R.; Kudchadkar, S.R.; Fackler, J. Large language models and the perils of their hallucinations. Crit. Care 2023, 27, 120. [Google Scholar] [CrossRef]
  134. Farhat, F.; Sohail, S.S.; Madsen, D.Ø. How trustworthy is ChatGPT? The case of bibliometric analyses. Cogent Eng. 2023, 10, 2222988. [Google Scholar] [CrossRef]
  135. Chelli, M.; Descamps, J.; Lavoué, V.; Trojani, C.; Azar, M.; Deckert, M.; Raynier, J.-L.; Clowez, G.; Boileau, P.; Ruetsch-Chelli, C. Hallucination Rates and Reference Accuracy of ChatGPT and Bard for Systematic Reviews: Comparative Analysis. J. Med. Internet Res. 2024, 26, e53164. [Google Scholar] [CrossRef]
  136. Metze, K.; Morandin-Reis, R.C.; Lorand-Metze, I.; Florindo, J.B. Bibliographic research with ChatGPT may be misleading: The problem of hallucination. J. Pediatr. Surg. 2024, 59, 158. [Google Scholar] [CrossRef]
  137. Giuffrè, M.; You, K.; Shung, D.L. Evaluating ChatGPT in Medical Contexts: The Imperative to Guard Against Hallucinations and Partial Accuracies. Clin. Gastroenterol. Hepatol. 2024, 22, 1145–1146. [Google Scholar] [CrossRef]
  138. Emsley, R. ChatGPT: These are not hallucinations–they’re fabrications and falsifications. Schizophrenia 2023, 9, 52. [Google Scholar] [CrossRef] [PubMed]
  139. Goddard, J. Hallucinations in ChatGPT: A Cautionary Tale for Biomedical Researchers. Am. J. Med. 2023, 136, 1059–1060. [Google Scholar] [CrossRef] [PubMed]
  140. Yaprak, B. Generative Artificial Intelligence in Marketing: The Invisible Danger of AI Hallucinations. Ekon. İşletme Yönetim Dergisi 2024, 8, 133–158. [Google Scholar] [CrossRef]
  141. George, A.S.; George, A.H.; Martin, A.G. The environmental impact of ai: A case study of water consumption by chat gpt. Partn. Univers. Int. Innov. J. 2023, 1, 97–104. [Google Scholar]
  142. Haque, M.A.; Li, S. Exploring ChatGPT and its impact on society. AI Ethics 2024, 5, 791–803. [Google Scholar] [CrossRef]
  143. Khowaja, S.A.; Khuwaja, P.; Dev, K.; Wang, W.; Nkenyereye, L. ChatGPT Needs SPADE (Sustainability, PrivAcy, Digital Divide, and Ethics) Evaluation: A Review. Cogn. Comput. 2024, 16, 2528–2550. [Google Scholar] [CrossRef]
  144. De Vries, A. The growing energy footprint of artificial intelligence. Joule 2023, 7, 2191–2194. [Google Scholar] [CrossRef]
  145. Baldassarre, M.T.; Caivano, D.; Fernandez Nieto, B.; Gigante, D.; Ragone, A. The social impact of generative ai: An analysis on chatgpt. In Proceedings of the 2023 ACM Conference on Information Technology for Social Good, Lisbon, Portugal, 6–8 September 2023; pp. 363–373. [Google Scholar]
  146. Miao, H.; Li, C.; Wang, J. A future of smarter digital health empowered by generative pretrained transformer. J. Med. Internet Res. 2023, 25, e49963. [Google Scholar] [CrossRef]
  147. Temsah, M.-H.; Aljamaan, F.; Malki, K.H.; Alhasan, K.; Altamimi, I.; Aljarbou, R.; Bazuhair, F.; Alsubaihin, A.; Abdulmajeed, N.; Alshahrani, F.S. ChatGPT and the future of digital health: A study on healthcare workers’ perceptions and expectations. Healthcare 2023, 11, 1812. [Google Scholar] [CrossRef]
  148. Levine, S.; Beck, S.W.; Mah, C.; Phalen, L.; Pittman, J. How do students use ChatGPT as a writing support? J. Adolesc. Adult Lit. 2025, 68, 445–457. [Google Scholar] [CrossRef]
  149. Farhi, F.; Jeljeli, R.; Aburezeq, I.; Dweikat, F.F.; Al-shami, S.A.; Slamene, R. Analyzing the students’ views, concerns, and perceived ethics about chat GPT usage. Comput. Educ. Artif. Intell. 2023, 5, 100180. [Google Scholar] [CrossRef]
  150. Lo, C.K.; Hew, K.F.; Jong, M.S.-y. The influence of ChatGPT on student engagement: A systematic review and future research agenda. Comput. Educ. 2024, 219, 105100. [Google Scholar] [CrossRef]
  151. Berry, D.M. The limits of computation: Joseph Weizenbaum and the ELIZA chatbot. Weizenbaum J. Digit. Soc. 2023, 3. [Google Scholar] [CrossRef]
  152. Rajaraman, V. From ELIZA to ChatGPT. Resonance 2023, 28, 889–905. [Google Scholar] [CrossRef]
  153. Santiago-Ruiz, E. Writing with ChatGPT in a Context of Educational Inequality and Digital Divide. Int. J. Educ. Dev. Using Inf. Commun. Technol. 2023, 19, 28–38. [Google Scholar]
  154. Zhang, C.; Rice, R.E.; Wang, L.H. College students’ literacy, ChatGPT activities, educational outcomes, and trust from a digital divide perspective. New Media Soc. 2024, 14614448241301741. [Google Scholar] [CrossRef]
  155. Sebastian, G.; Sebastian, S.R. Exploring ethical implications of ChatGPT and other AI chatbots and regulation of disinformation propagation. Ann. Eng. Math. Comput. Intell. 2024, 1, 1–12. [Google Scholar] [CrossRef]
  156. Monteith, S.; Glenn, T.; Geddes, J.R.; Whybrow, P.C.; Achtyes, E.; Bauer, M. Artificial intelligence and increasing misinformation. Br. J. Psychiatry 2024, 224, 33–35. [Google Scholar] [CrossRef]
  157. Al-kfairy, M.; Mustafa, D.; Kshetri, N.; Insiew, M.; Alfandi, O. Ethical challenges and solutions of generative AI: An interdisciplinary perspective. Informatics 2024, 11, 58. [Google Scholar] [CrossRef]
  158. Prem, E. From ethical AI frameworks to tools: A review of approaches. AI Ethics 2023, 3, 699–716. [Google Scholar] [CrossRef]
  159. He, J.; Baxter, S.L.; Xu, J.; Xu, J.; Zhou, X.; Zhang, K. The practical implementation of artificial intelligence technologies in medicine. Nat. Med. 2019, 25, 30–36. [Google Scholar] [CrossRef]
  160. Rieke, N.; Hancox, J.; Li, W.; Milletari, F.; Roth, H.R.; Albarqouni, S.; Bakas, S.; Galtier, M.N.; Landman, B.A.; Maier-Hein, K. The future of digital health with federated learning. NPJ Digit. Med. 2020, 3, 119. [Google Scholar] [CrossRef] [PubMed]
  161. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education–where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39. [Google Scholar] [CrossRef]
  162. Luckin, R.; Holmes, W. Intelligence Unleashed: An Argument for AI in Education; UCL Knowledge Lab: London, UK, 2016. [Google Scholar]
  163. Eloundou, T.; Manning, S.; Mishkin, P.; Rock, D. GPTs are GPTs: Labor market impact potential of LLMs. Science 2024, 384, 1306–1308. [Google Scholar] [CrossRef] [PubMed]
  164. Shafik, W. Emerging Technologies for Small and Medium Enterprises (SMEs) Growth: ChatGPT, Blockchain, Robotics, and Artificial Intelligence. In Fostering Economic Diversification and Sustainable Business Through Digital Intelligence; Pujari, P., Khan, S.A., Kumar, A., Naim, A., Eds.; IGI Global: Hershey, PA, USA, 2025; pp. 173–196. [Google Scholar] [CrossRef]
  165. Walke, F.; Klopfers, L.; Winkler, T.J. Leveraging Generative AI and ChatGPT in SMEs: A Grounded Model. In Proceedings of the 58th Hawaii International Conference on System Sciences, Waikoloa, HI, USA, 7–10 January 2025. [Google Scholar]
  166. Acemoglu, D.; Restrepo, P. Artificial intelligence, automation, and work. In The Economics of Artificial Intelligence: An Agenda; University of Chicago Press: Chicago, IL, USA, 2018; pp. 197–236. [Google Scholar]
  167. Miller, T. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. 2019, 267, 1–38. [Google Scholar] [CrossRef]
  168. Hacker, P.; Engel, A.; Mauer, M. Regulating ChatGPT and other large generative AI models. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, IL, USA, 12–15 June 2023; pp. 1112–1123. [Google Scholar]
  169. Strubell, E.; Ganesh, A.; McCallum, A. Energy and Policy Considerations for Modern Deep Learning Research. Proc. AAAI Conf. Artif. Intell. 2020, 34, 13693–13696. [Google Scholar] [CrossRef]
  170. Crawford, K. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence; Yale University Press: New Haven, CT, USA, 2021. [Google Scholar]
  171. Eubanks, V. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor; St. Martin’s Press: New York, NY, USA, 2018. [Google Scholar]
  172. Floridi, L.; Cowls, J. A Unified Framework of Five Principles for AI in Society; Harvard Data Science Review: Boston, MA, USA, 2019. [Google Scholar] [CrossRef]
  173. UNESCO. Recommendation on the Ethics of Artificial Intelligence; United Nations Educational, Scientific and Cultural Organization: Paris, France, 2022. [Google Scholar]
Figure 1. Google Trends for “ChatGPT” (https://trends.google.com/trends/, accessed on 1 December 2024).
Table 1. Comparison of this review with examples of recent reviews on ChatGPT.
Study | Focus Area | Review Type | Key Contribution | How This Review Adds Value
Yenduri, Ramalingam, Selvi, Supriya, Srivastava, Maddikunta, Raj, Jhaveri, Prabadevi, Wang, Vasilakos and Gadekallu [15] | General | Narrative + Technical | Overview of ChatGPT’s architecture and applications | Adds sectoral analysis and societal focus
Farhat, Silva, Hassani, Madsen, Sohail, Himeur, Alam and Zafar [14] | General | Bibliometric | Maps research trends and clusters | Builds on themes with applied synthesis
Farrokhnia, Banihashem, Noroozi and Wals [18] | Education | SWOT | Identifies strengths and risks in educational use | Expands to other sectors and policy implications
Sallam [12] | Healthcare | Systematic | Reviews medical and educational roles | Situates healthcare in broader digital context
Polat, Topuz, Yıldız, Taşlıbeyaz and Kurşun [16] | Education | Bibliometric | Tracks themes in education-related research | Adds ethical and platform-specific insights
Koo [17] | General | Bibliometric | Wide-ranging survey of disciplines and output | Complements with qualitative, cross-sector synthesis
This review | Health, Education, Economy | Narrative | Integrates applications, ethics, and policy across domains | Focus on digital transformation, equity, and governance
Table 2. Summary of ChatGPT applications in key domains.
Domain | Key Use Cases | Opportunities | Risks and Challenges
Healthcare | Triage bots, patient Q&A, medical summaries, mental health support | Simplified communication, 24/7 assistance, potential for health literacy improvement | Hallucinations, biased outputs, privacy breaches, reduced human empathy
Education | Tutoring, feedback, curriculum generation, AI literacy support | Personalized learning, efficiency, support for educators | Integrity concerns, critical thinking erosion, access inequality
Economy | Business writing, forecasting, customer service, marketing content | Cost savings, increased productivity, SME empowerment | Financial misinformation, data leakage, automation-related job loss
Table 3. Ethical issues by domain.
Ethical Concern | Healthcare | Education | Economy
Bias | Unequal diagnosis accuracy; exacerbates existing health disparities | Reinforcement of stereotypes; unequal AI feedback quality | Discriminatory financial modeling; biased customer profiling
Hallucination | Incorrect medical advice; risk of harm in clinical triage tools | Incorrect information in learning content; misleads students | Errors in business analysis; hallucinated forecasts mislead firms
Privacy and Data Security | Potential HIPAA/GDPR violations through third-party data flows | Student data misuse; lack of informed consent in LMS tools | Confidential business data leakage; insecure integration
Table 4. Societal impacts of ChatGPT by domain.
Theme | Healthcare | Education | Economy
Access | Simplifies language; 24/7 support for underserved users | Expands personalized learning via AI tutors | Aids SMEs with content and analysis tools
Equity | Risks excluding low digital literacy users | May widen achievement gaps based on tech access | Digital divide limits adoption in smaller firms
Misinformation | AI may give incorrect diagnoses or advice | Students may absorb false or misleading info | Hallucinated data can mislead business decisions
Labor Effects | Reduces admin tasks; risks replacing human engagement | Automates feedback; risks deskilling educators/students | Threatens entry-level jobs; shifts skill demands
Trust | Chatbots may erode confidence in care quality | Overuse may cause doubt in student authorship | Opaque outputs reduce faith in AI-driven strategies
Table 5. Future research priorities by domain.
Focus Area | Healthcare | Education | Economy
Clinical Safety and Trust | Test output accuracy, support diverse users | N/A | N/A
Learning and Equity | N/A | Study AI’s impact on retention, fairness, and assessments | N/A
Business Reliability | N/A | N/A | Assess AI-generated content and decision logic
Bias Mitigation | Examine diagnostic bias | Analyze feedback bias by group/subject | Detect bias in hiring or financial tools
Privacy and Governance | Ensure HIPAA/GDPR compliance | Address student data protection | Prevent proprietary data leaks
Sustainability | Evaluate energy/water impact of AI tools | Integrate AI literacy and sustainability in curricula | Balance AI benefits with environmental costs
Multimodality and Reasoning | Explore voice/image for clinical use | Test multimodal AI for tutoring and learning | Use AI across formats in market/customer analytics
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
