Systematic Review

Artificial Intelligence and the Future of Mental Health in a Digitally Transformed World

by Aggeliki Kelly Fanarioti and Kostas Karpouzis *

Department of Communication, Media and Culture, Panteion University of Social and Political Sciences, GR-176 71 Athens, Greece
* Author to whom correspondence should be addressed.
Computers 2025, 14(7), 259; https://doi.org/10.3390/computers14070259
Submission received: 19 May 2025 / Revised: 25 June 2025 / Accepted: 26 June 2025 / Published: 30 June 2025
(This article belongs to the Special Issue AI in Its Ecosystem)

Abstract

Artificial Intelligence (AI) is reshaping mental healthcare by enabling new forms of diagnosis, therapy, and patient monitoring. Yet this digital transformation raises complex policy and ethical questions that remain insufficiently addressed. In this paper, we critically examine how AI-driven innovations are being integrated into mental health systems across different global contexts, with particular attention to governance, regulation, and social justice. The study follows the PRISMA-ScR methodology to ensure transparency and methodological rigor, while also acknowledging its inherent limitations, such as the emphasis on breadth over depth and the exclusion of non-English sources. Drawing on international guidelines, academic literature, and emerging national strategies, it identifies both opportunities, such as improved access and personalized care, and threats, including algorithmic bias, data privacy risks, and diminished human oversight. Special attention is given to underrepresented populations and the risks of digital exclusion. The paper argues for a value-driven approach that centers equity, transparency, and informed consent in the deployment of AI tools. It concludes with actionable policy recommendations to support the ethical implementation of AI in mental health, emphasizing the need for cross-sectoral collaboration and global accountability mechanisms.

1. Introduction

Artificial Intelligence is rapidly changing the field of mental health, offering methods and tools that support diagnosis, personalize treatment, and extend access to care through digital platforms. With applications ranging from conversational agents that deliver cognitive-behavioral interventions to predictive models using biometric data, AI is enabling new forms of mental health support [1]; these advances are situated within the broader digital transformation of healthcare, marked by data-driven technologies, remote monitoring, and decision-support tools in clinical and community settings [2].
This transformation comes at a time of acute systemic strain: mental health conditions are rising globally, exacerbated by economic stress, environmental instability, and sociotechnical disruption. The COVID-19 pandemic highlighted both the urgency of improving mental health services and the potential of digital tools to address access gaps [3]. However, despite these advances, major questions remain about how AI can be ethically and equitably deployed in mental health systems.
Concerns include the protection of sensitive personal data, transparency of algorithmic decision making, the potential for bias and harm, and the social impact of replacing or augmenting human care with automated systems [4,5]. These challenges are especially acute in mental health, where context, trust, and human judgment are often integral to effective care. Furthermore, deployment environments are deeply unequal: while some countries are experimenting with AI-assisted diagnostics, others struggle with basic digital infrastructure or mental health workforce shortages.
Policy responses are beginning to address these risks. The World Health Organization’s Global Strategy on Digital Health 2020–2025 promotes the ethical use of digital tools in health, emphasizing human rights, inclusive governance, and equitable access [6], while the European Union’s Artificial Intelligence Act classifies AI systems used in healthcare as “high-risk,” requiring extensive transparency, oversight, and compliance with fundamental rights [7]. Yet, many frameworks remain aspirational or underspecified in how ethical principles translate into practice—particularly in low-resource or decentralized health systems.
In this paper, we offer a strategic review of AI in mental health, viewed through a policy and ethics lens. Unlike studies that focus on technical capabilities or clinical validation, this analysis examines how AI systems are governed, the values they encode, and the means by which international institutions and national governments can respond to the challenges they pose, with the aim of identifying both the opportunities and the structural limitations that shape AI integration in mental health services. The paper is structured as follows: Section 2 surveys related work and conceptual frameworks from both the computer science and health ethics literature; Section 3 describes current AI applications in mental health, including diagnostic support, virtual care platforms, and affective computing tools; Section 4 examines international and national policy strategies, with a case study on Greece; Section 5 and Section 6 focus on ethical and governance challenges, including data protection, algorithmic bias, and digital exclusion; and the paper concludes with recommendations for the ethically grounded and policy-aware deployment of AI in mental healthcare.

Methodology

This paper is based on a strategic review originally developed as part of a graduate dissertation examining the intersection of AI, digital transformation, and mental health policy. Here, we have extended the original work with the additional steps included in the PRISMA [8] extension for scoping reviews (PRISMA-ScR, https://www.prisma-statement.org/scoping, last accessed: 31 May 2025). This section presents the information related to the methods section of the checklist; results are outlined in Section 4, and the results are discussed in Section 5 and Section 6.
The review process combined structured literature searches with thematic analysis of international and national policy documents. Academic sources were identified through targeted searches on PubMed, Scopus, and Google Scholar using the following keywords: “artificial intelligence,” “mental health,” “digital transformation,” “algorithmic bias,” and “health ethics”. We narrowed the search to peer-reviewed articles published after 2018 to increase relevance to our research scope (n = 7577) and removed duplicate entries appearing in more than one of the three databases (n = 5402).
During screening, we removed papers that referred to only a subset of the query keywords (n = 1910) and assessed the remaining 260 papers for relevance. During this process, we found that a large number of them (n = 210) mentioned digital transformation only briefly, while others relied on conventional rule-based methods, e.g., pre-scripted interaction (n = 30). Finally, we separated the remaining 20 entries into 16 new studies and 4 reviews or meta-reports. This process is summarized in Figure 1.
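The mechanical part of this screening flow can be expressed programmatically. The sketch below is illustrative only: it assumes the database exports were merged into a single table with "title", "abstract", "doi", and "year" columns, which is an assumption about the data rather than a description of the actual scripts used for this review.

```python
# Illustrative sketch of the mechanical screening steps reported above
# (a hypothetical reconstruction, not the authors' actual workflow).
import pandas as pd

KEYWORDS = ["artificial intelligence", "mental health", "digital transformation",
            "algorithmic bias", "health ethics"]

def deduplicate(records: pd.DataFrame) -> pd.DataFrame:
    """Remove entries repeating across the three databases (matched here on DOI)."""
    return records.drop_duplicates(subset="doi", keep="first")

def mentions_all_keywords(row: pd.Series) -> bool:
    """True only if the title/abstract touch every query keyword, not just a subset."""
    text = f"{row['title']} {row['abstract']}".lower()
    return all(keyword in text for keyword in KEYWORDS)

def screen(records: pd.DataFrame) -> pd.DataFrame:
    """Apply the inclusion criteria: post-2018, deduplicated, all keywords present."""
    recent = records[records["year"] > 2018]
    unique = deduplicate(recent)
    return unique[unique.apply(mentions_all_keywords, axis=1)]
```

The subsequent relevance assessment, such as excluding papers that mention digital transformation only in passing or that rely on rule-based methods, was performed manually, as described above.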
In parallel, current policy materials were drawn from the World Health Organization, European Commission, UNICEF, and national health strategies, with a focus on documents that explicitly reference digital mental health or AI regulation. Grey literature [9] from civil society and intergovernmental bodies was also reviewed to capture emerging concerns around equity, surveillance, and participatory design.
Sources were then analyzed thematically, with particular attention to the values of fairness, transparency, inclusivity, and accountability. These themes informed the structure of the paper, which moves from a review of technological applications to an analysis of ethical and governance challenges, concluding with strategic recommendations for responsible AI integration in mental health care.

2. Related Work and Conceptual Background

The possible impact of artificial intelligence on mental health treatment has generated growing academic interest, particularly in relation to diagnostics, treatment delivery, and self-monitoring applications [10,11]. A substantial body of work explores how machine learning (ML) and natural language processing (NLP) are applied in psychiatry and digital phenotyping, offering tools that aim to improve prediction, classification, and personalization of mental healthcare [2,12]. These developments have been supported by the proliferation of mobile health (mHealth) applications and wearable technologies that capture behavioral and physiological data relevant to mental health assessment.
Despite these advances, the integration of AI into mental health systems has raised a number of ethical, legal, and policy-related concerns. Researchers have questioned the adequacy of existing data protection regimes, the risks of algorithmic discrimination or bias, the transparency of decision-making processes, and the broader social implications of substituting human interaction with automated tools [4,5]. In particular, mental health presents distinctive ethical challenges due to the sensitivity of patient data, the potential for stigma, and the vulnerability of user populations. The literature has called for frameworks that center human dignity, autonomy, and the principle of “do no harm” in AI design and deployment [7].
From a policy perspective, several institutions have developed guidance documents and strategic roadmaps aimed at governing digital mental health technologies. The World Health Organization’s Global Strategy on Digital Health 2020–2025 underscores the importance of global coordination and equity in digital health innovation, while emphasizing the need to strengthen digital governance systems [6]. At the European level, the Artificial Intelligence Act proposes a risk-based regulatory framework that treats AI tools in healthcare as high-risk applications, requiring thorough documentation, algorithmic transparency, and human oversight.
Beyond institutional policy, critical scholars have highlighted the limitations of techno-centric approaches and called for a more nuanced understanding of the political and social contexts in which AI systems operate. Sharon [13], for instance, argued that digital health infrastructures risk amplifying existing inequities unless questions of trust, participation, and power asymmetries are explicitly addressed. Likewise, the governance of digital mental health must contend with issues such as cross-border data flows, private sector involvement, and the digital divide—especially when implemented in low-resource settings.
In the following sections, we build upon these contributions by offering a strategic review that synthesizes ethical discourse, regulatory initiatives, and implementation challenges in the context of AI-driven mental health. Our work positions itself at the intersection of computer science, global health policy, and applied ethics, proposing a framework for assessing readiness and responsibility in the deployment of mental health AI technologies.

3. Digital Transformation and AI in Mental Health

Digital transformation in mental health refers to the integration of information and communication technologies, including AI-based systems, into mental health services. This transformation is not limited to the digitization of existing practices, but increasingly involves the development of novel tools that expand, automate, or decentralize care delivery, and sometimes even the redesign of established practices and workflows from scratch. AI systems now support a range of functions including diagnosis, therapy, monitoring, triage, and patient engagement [2,14]. Among the most widely adopted technologies are telehealth platforms that facilitate synchronous (e.g., video consultations) and asynchronous (e.g., app-based messaging) interactions between mental health professionals and patients. These systems have been particularly effective in increasing access for underserved populations in rural or remote areas, especially during the COVID-19 pandemic [1].
AI-assisted diagnostics represent a second major area of growth. Machine learning models are used to analyze speech, facial expressions, social media activity, or wearable data to detect early signs of depression, anxiety, and other mood disorders [2,12]. Natural language processing (NLP) and affective computing techniques have been applied to identify biomarkers of distress or suicidal ideation in textual and audio inputs.
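As a concrete illustration of the kind of text-based screening such systems build on, the following minimal sketch trains a bag-of-words classifier on toy examples. The texts, labels, and model choice are illustrative assumptions and do not correspond to any clinically validated tool discussed in this paper.

```python
# Toy illustration of text-based distress screening (an assumption-laden sketch,
# not a clinical instrument): TF-IDF features feed a linear classifier that
# outputs a probability, which would then require human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I can't sleep and everything feels pointless",  # toy "elevated distress" example
    "Had a great walk with friends today",           # toy neutral example
    "I feel hopeless and exhausted all the time",
    "Looking forward to the weekend trip",
]
labels = [1, 0, 1, 0]  # 1 = elevated distress, 0 = neutral (illustrative only)

screener = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
screener.fit(texts, labels)

# The output is a screening probability, not a diagnosis; any deployment would
# require clinical validation and the oversight discussed in Sections 5 and 6.
print(screener.predict_proba(["I have been feeling hopeless lately"])[0][1])
```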
Digital therapeutics and chatbot-based interventions form a third important category. Tools such as Woebot and Wysa offer conversational cognitive behavioral therapy (CBT), mindfulness prompts, and psychoeducation without human mediation [15]. These systems are increasingly used in low-threshold, self-help contexts and for population-level mental health promotion, though their clinical validation remains limited.
Beyond emotional detection, some AI systems now simulate companionship and intimacy, creating the illusion of affective reciprocity. Applications like Replika and Character.AI illustrate how generative models can engage users in sustained, emotionally charged interactions that feel socially meaningful, despite lacking genuine understanding or care. These systems raise new ethical questions about attachment, dependency, and emotional vulnerability. If such a system is reprogrammed, monetized, or withdrawn, as occurred with Replika’s removal of romantic features (The confusing reality of AI friends, https://www.theverge.com/c/24300623/ai-companions-replika-openai-chatgpt-assistant-romance, last accessed: 31 May 2025), users may experience real psychological distress. The boundaries between therapeutic support, entertainment, and manipulation become increasingly blurred.
Monitoring and prediction systems use mobile sensors, app data, and biometric inputs to track mood patterns, sleep quality, and behavioral anomalies. These can be combined with predictive analytics to anticipate depressive episodes or treatment dropouts. Predictive systems are being explored as components of stepped-care models and relapse prevention strategies [16]. Notable examples of this category include Stepped Care 2.0 (Stepped Care 2.0, https://mentalhealthcommission.ca/what-we-do/access/stepped-care-2-0/, last accessed: 31 May 2025), an evidence-informed model that integrates in-person and virtual care, SilverCloud (SilverCloud, https://www.silvercloudhealth.com/uk/programmes/tools/stepped-care-model, last accessed: 31 May 2025), a clinically tested digital platform which provides patient support from self-management to guided, evidence-based programs and secondary care, and TourHeart+, a web-based stepped-care platform which supports mental well-being by offering internet-based interventions [17].
Virtual reality (VR) and augmented reality (AR) environments are also being explored for use in exposure therapy, cognitive training, and social skill development. These immersive technologies have shown early promise in the treatment of anxiety disorders and PTSD [18], but the cost and sociotechnical complexity of scaling to multiple users, remote locations, and varied curricula may be limiting factors [19].
Generative AI systems capable of producing realistic text, images, audio, and video introduce new mental health risks and regulatory challenges. While these tools do offer creative and therapeutic opportunities, they also enable the generation of potentially disturbing, violent, or delusional content, with individuals misusing them to reinforce negative self-images, simulate traumatic events, or isolate themselves within artificial realities. These dynamics can be especially harmful to users with existing vulnerabilities, including depression or psychosis, where distinctions between generated and lived experience may become psychologically destabilizing. Moreover, there is growing concern that generative AI contributes to disembodied identity formation, reality distortion, and compulsive content creation, all of which may require new forms of digital mental health literacy and governance [20].
Finally, integrated hybrid care models combine AI-supported interventions with traditional services. These models aim to maintain human oversight while leveraging automation to reduce costs, personalize care, and extend reach. However, hybridization raises its own challenges, including interoperability, continuity of care, and clinician workload management.
Despite their potential, most of these tools face critical implementation barriers, including lack of robust clinical validation, limited regulation of safety and efficacy, data governance challenges, and inconsistent integration into national health systems. Moreover, many applications are developed by private sector actors whose interests may not align with public health priorities [13].

Case Studies

In this subsection, we briefly review prominent digital mental health tools that have received public attention and funding, as a basis for discussion on the strengths and limitations of current AI-based interventions.
Woebot, a well-known chatbot grounded in principles of cognitive behavioral therapy, has demonstrated short-term effectiveness in alleviating anxiety and depressive symptoms, especially among younger users. Its user-friendly interface and availability outside traditional care settings have been lauded as important steps toward accessible psychological support. However, the system’s reliance on scripted interactions and its inability to recognize or respond to crisis language (e.g., expressions of suicidality) have prompted concerns about safety and accountability in clinical contexts. Despite its therapeutic framing, Woebot does not operate under formal regulatory oversight as a medical device in most jurisdictions, and there is limited transparency about the boundaries of its functionality.
A similar pattern emerges with Wysa, an AI-enabled self-help platform increasingly integrated into workplace wellness programs and primary care settings. While it includes optional human coaching, the default experience is automated, and questions remain about the long-term efficacy of such hybrid models, particularly when deployed in underresourced systems. Ethical critiques have also highlighted insufficient clarity on data governance, especially in corporate use cases where sensitive mental health interactions may occur without fully informed consent or user awareness of third-party data access.
Mindstrong was a venture-backed U.S. startup that aimed to use smartphone metadata, such as typing speed and sleep patterns, to detect early signs of mental health deterioration. Their platform was eventually shut down in 2022 after internal concerns about scientific validity and a lack of demonstrable outcomes; this case underscores the difficulty of operationalizing predictive analytics in real-world mental health contexts, where noisy data, ethical ambiguity, and the need for clinical nuance often outpace the promises of AI-driven optimization.
These examples illustrate a broader tension in the digital mental health space: AI systems are increasingly positioned as scalable solutions to long-standing mental health access gaps, yet many of them are introduced without independent evaluation, longitudinal evidence, or mechanisms for participatory user feedback. The asymmetry between technical development and ethical infrastructure risks eroding trust among users and practitioners alike. If these systems are to play a meaningful role in future mental health ecosystems, they must be embedded within robust frameworks of clinical accountability, regulatory clarity, and co-designed development processes that center the experiences and rights of users—not just the ambitions of innovation.
When taken together, these three cases reveal a structural pattern in the deployment of AI systems for mental health care: while technical innovation is often rapid, visible, and well-funded, the ethical, clinical, and regulatory infrastructures necessary to support these systems tend to lag behind. Woebot demonstrates how even well-intentioned tools can inadvertently give a false sense of safety and therapeutic adequacy when deployed without formal oversight or safeguards for crisis scenarios, while Wysa highlights concerns about data transparency and user agency, particularly in commercial or institutional contexts where the platform may serve multiple, and potentially conflicting, stakeholders. Mindstrong, by contrast, shows the consequences of overpromising the predictive capabilities of AI in high-stakes domains without robust scientific validation or sustainable implementation models. Despite their different approaches and outcomes, all three underscore the same core lesson: without synchronized development between technological capability and ethical governance, such tools risk undermining user trust, misrepresenting clinical value, and reinforcing systemic gaps in care. This persistent tension between the speed of innovation and the slowness of institutional adaptation is not an incidental problem, but a defining characteristic of contemporary digital mental health, requiring urgent attention from regulators, developers, and mental health professionals alike.

4. Policy Frameworks and Strategic Initiatives

The rapid adoption of AI in mental healthcare has prompted international organizations and national governments to articulate policies aimed at guiding development, deployment, and oversight. These policy frameworks vary significantly in scope, specificity, and enforceability, but collectively reflect growing awareness of the need for governance mechanisms that ensure safety, equity, and ethical compliance.
At the international level, the World Health Organization (WHO) has been instrumental in promoting strategic coordination through its Global Strategy on Digital Health 2020–2025 [6]. The document outlines goals such as strengthening national digital health ecosystems, ensuring interoperability, and establishing governance structures that promote trust. It explicitly calls for capacity-building, data protection, and inclusive stakeholder participation—especially in low- and middle-income countries. Mental health is identified as a priority area where digital interventions, including AI, could enhance reach and efficiency, provided ethical safeguards are in place.
The European Union (EU) has taken a more regulatory approach. The proposed Artificial Intelligence Act (2021) classifies AI systems used in healthcare as “high-risk” and requires developers to meet stringent transparency, accountability, and data management criteria [21]. The Act mandates impact assessments, documentation of training datasets, and human oversight mechanisms. It also includes prohibitions on certain uses of AI deemed incompatible with fundamental rights. In parallel, the EU’s Comprehensive Approach to Mental Health (2023) links digital innovation to the broader goal of reducing the annual cost of inaction on mental illness, estimated at over EUR 600 billion [22].
Other bodies such as UNICEF and the Council of Europe have also weighed in. UNICEF’s Mental Health Innovation Portfolio emphasizes the potential of chatbot-driven platforms, VR interventions, and big data analytics, especially for children and adolescents. It also supports partnerships with tech companies and governments to scale promising digital tools, but highlights the need for co-creation and evidence-based implementation [23]. The Council of Europe, while lacking legislative authority in health policy, has issued human rights-based recommendations stressing data privacy, consent, and algorithmic fairness—particularly in light of the General Data Protection Regulation (GDPR) and the European Convention on Human Rights.
At the national level, policies differ according to digital maturity, health infrastructure, and political will. For example, countries such as Finland and Germany have integrated digital mental health services into public health insurance schemes, while others rely heavily on NGO or private-sector initiatives. In many contexts, national strategies lack specific provisions for mental-health-related AI tools, relying instead on general digital health laws or AI ethics guidelines. Greece is a prominent example here: despite recent digital modernization efforts, mental healthcare remains underresourced, with long-standing gaps in service coverage, workforce distribution, and public funding. While Greece has adopted the European Digital Compass and participates in EU-wide digital health initiatives, there is no dedicated national framework for AI in mental health. Pilot programs exist—mainly driven by NGOs or university consortia—but they remain fragmented and dependent on external funding. In contrast, countries such as Finland and the United Kingdom have adopted more coordinated and future-oriented approaches to integrating AI in health systems: Finland’s national AI strategy emphasizes trust, transparency, and citizen engagement, including sector-specific ethical guidelines and strong public sector capacity [24], while the UK’s NHS AI Lab [25] has created targeted funding instruments and evaluation protocols to support safe experimentation and public-value alignment in AI-driven health services. While these models are not without challenges, they illustrate how proactive governance and institutional coherence can facilitate ethical and scalable AI adoption.
These differences highlight a central challenge: while international frameworks increasingly recognize the importance of ethical and inclusive AI in mental health, national implementation remains uneven. There is a risk that global best practices will remain aspirational unless they are translated into enforceable local policy, supported by institutional capacity and stakeholder engagement. Institutional capacity includes not only legal mandates and technical infrastructure, but also adequately resourced regulatory agencies, coordinated digital health strategies, and trained personnel within public health systems, while stakeholder engagement refers to the meaningful inclusion of mental health professionals, service users, civil society organizations, and local communities in the design, evaluation, and governance of AI applications. The next section addresses the key ethical and governance issues that must be resolved to ensure that such translation is not only technically feasible but normatively robust.

5. Ethical Challenges and Governance Gaps

As AI technologies become increasingly embedded in mental health services, ethical questions regarding their development, deployment, and oversight grow more urgent. These concerns are not only technical but structural, intersecting with broader questions of fairness, inclusion, and human rights. This section identifies three core ethical domains—privacy and consent, algorithmic accountability, and access and equity—where significant governance gaps persist.

5.1. Data Privacy and Informed Consent

AI-driven mental health tools rely on the continuous collection and processing of sensitive personal data, including behavioral, emotional, and biometric indicators. In mobile health (mHealth) and digital phenotyping contexts, users may be unaware of the extent to which their data is being tracked, repurposed, or shared with third parties [5], while informed consent processes are often reduced to passive, opt-in digital agreements that do not reflect a nuanced understanding of the potential risks—particularly in populations experiencing psychological distress. Even when consent is obtained, data protection mechanisms are frequently inadequate. Mental health data is especially vulnerable to misuse, whether for discriminatory purposes (e.g., in employment or insurance) or through reidentification from anonymized datasets. While the General Data Protection Regulation (GDPR) offers a strong regulatory foundation in the European context, enforcement remains uneven, and protections are less robust in many low- and middle-income countries.

5.2. Algorithmic Bias and Clinical Accountability

AI models used in mental health are typically trained on limited datasets that may not capture the cultural, linguistic, or socio-demographic diversity of real-world populations. As a result, these systems risk reproducing existing health disparities or introducing new forms of bias—particularly in diagnosis and triage [4]. For instance, emotion recognition systems may perform poorly across different ethnic groups or misinterpret affective signals due to cultural variation [26]. In addition, the opacity of many machine learning systems complicates clinical accountability. When AI systems produce recommendations or risk scores, it is often unclear how these outputs are derived. This “black box” problem limits clinicians’ ability to interrogate the rationale behind algorithmic decisions, raising concerns about liability and due process in clinical care [27]. Calls for explainable AI (XAI) in healthcare have emerged as a response, but there is limited consensus on what constitutes meaningful explanation in complex psychiatric contexts, alongside concerns that disclosure requirements may conflict with algorithmic innovation and the protection of intellectual property.
In addition, as AI systems are increasingly integrated into clinical and policy workflows, there is a risk that decisions with profound emotional and social consequences will become dehumanized. AI triage systems, automated referrals, and mental health chatbots may expedite service delivery, but they also risk creating a “bureaucratic distancing effect” where human judgment and empathy are outsourced to algorithmic systems. If not carefully designed, such systems may obscure responsibility, limit recourse for appeals, and weaken the relational aspect of care. These trends reflect a broader concern with “AI-mediated bureaucracy”, in which institutions adopt algorithmic logic at the expense of human discretion and emotional attunement [28].

5.3. Digital Exclusion and Structural Inequity

Digital mental health tools, including AI-based interventions, often assume reliable internet access, digital literacy, and the presence of enabling technologies such as smartphones or wearables. These assumptions may not hold across diverse populations, leading to the exclusion of already marginalized groups—including older adults, refugees, low-income individuals, and those in remote areas [29]. In these cases, AI tools may inadvertently reinforce systemic inequities by allocating resources based on data availability rather than clinical need. Furthermore, many tools are developed and deployed by private companies whose commercial priorities may conflict with public health goals. In the absence of strong regulatory frameworks or public accountability mechanisms, there is a risk of data commodification, lock-in to proprietary ecosystems, and the erosion of clinician–patient relationships. Addressing these structural barriers also requires sustainable funding and strategic partnerships: scaling equitable AI deployment in mental health will not be feasible without long-term investment in infrastructure, open-source tool development, and local capacity-building. Public–private partnerships (PPPs) often combine public resources with private technical expertise, but they must be carefully governed to align with public interest goals and prevent commercial capture. For example, the UK’s NHSX initiative (NHS Transformation Directorate, https://transform.england.nhs.uk/digitise-connect-transform/our-strategy-to-digitise-connect-and-transform/, last accessed: 31 May 2025) has piloted AI-driven mental health tools while embedding governance structures that promote transparency and ethical oversight, and UNICEF’s AI for Children initiative (UNICEF, AI for Children, https://www.unicef.org/innocenti/projects/ai-for-children, last accessed: 31 May 2025) illustrates how cross-sector collaboration can address equity and data protection risks in technology deployments affecting vulnerable populations. To ensure that such partnerships contribute meaningfully to mental health equity, transparent procurement frameworks, regulatory scrutiny, and stakeholder participation are essential.

6. Strategic Preconditions for Ethical AI Integration

The effective and ethical integration of AI into mental health systems requires more than technical robustness or regulatory compliance: it calls for a multi-level approach that addresses governance, design, evaluation, and capacity-building. In this section, we identify three strategic preconditions essential to the responsible deployment of AI in mental healthcare: values-based design, oversight and evaluation, and digital readiness through education and training.

6.1. Values-Based Design and Participatory Approaches

The design of AI systems for mental health should be informed by core ethical values such as autonomy, fairness, dignity, and transparency. While these principles are often cited in guidelines, their operationalization remains inconsistent. Translating high-level ethics into design features requires interdisciplinary collaboration across software engineering, clinical psychology, user experience design, and social sciences [7]. Participatory approaches can help bridge this gap. Co-design methods that involve clinicians, patients, caregivers, and marginalized users in the development process enhance contextual sensitivity and trustworthiness [13]. Such methods are particularly important in mental health, where lived experience plays a key role in evaluating the relevance, tone, and impact of digital interventions.
Translation of abstract ethical values into concrete design choices remains a critical challenge in AI development. For example, the principle of autonomy can be embodied in mental health apps by allowing users to set personalized interaction thresholds, choose between different types of support (e.g., structured CBT prompts vs. open-ended journaling), and control when and how data is collected or shared. A commitment to fairness, meanwhile, might involve building diverse and representative training datasets, as well as testing system outputs for performance disparities across demographic groups. Tools like differential privacy and model interpretability can also help ensure that fairness and transparency are not afterthoughts, but embedded throughout the system lifecycle. These kinds of design choices require collaboration not only between engineers and clinicians, but also with end-users, especially those from vulnerable or marginalized populations, whose perspectives are crucial to identifying ethical blind spots and defining what “value” truly means in context.
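To make the notion of testing for performance disparities concrete, the sketch below computes recall separately per demographic group and reports the largest gap between groups. The metric, group labels, and example data are illustrative choices for this paper, not a prescribed fairness standard.

```python
# Minimal per-group performance check (illustrative; group categories, metric,
# and any acceptance threshold are assumptions, not a regulatory requirement).
from collections import defaultdict
from typing import Dict, Iterable

def recall_by_group(y_true: Iterable[int], y_pred: Iterable[int],
                    groups: Iterable[str]) -> Dict[str, float]:
    """Recall (sensitivity) for the positive class, computed separately per group."""
    tp, fn = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:
            (tp if pred == 1 else fn)[group] += 1
    return {g: tp[g] / (tp[g] + fn[g])
            for g in set(tp) | set(fn) if (tp[g] + fn[g]) > 0}

def max_disparity(per_group: Dict[str, float]) -> float:
    """Largest recall gap between any two groups; large gaps warrant investigation."""
    values = list(per_group.values())
    return max(values) - min(values) if values else 0.0

# Example: noticeably lower recall for one group signals a potential bias issue.
per_group = recall_by_group([1, 1, 1, 1, 0, 0], [1, 0, 1, 1, 0, 1],
                            ["group_a", "group_a", "group_b", "group_b", "group_a", "group_b"])
print(per_group, max_disparity(per_group))
```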

6.2. Evaluation, Oversight, and Accountability Structures

The deployment of AI systems in clinical or community settings must be accompanied by continuous evaluation mechanisms. These should assess not only accuracy and efficacy but also social impact, accessibility, and potential harms. Randomized controlled trials (RCTs) remain a gold standard, but real-world monitoring and post-deployment auditing are also essential. Oversight structures should be both internal and external. Internally, developers and healthcare institutions must implement ethics review boards, bias testing protocols, and documentation standards. Externally, public regulators should enforce independent evaluations, safety certifications, and grievance redress mechanisms. The concept of “algorithmic impact assessment,” already emerging in digital governance, could serve as a model for high-risk mental health applications [4].
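A hedged sketch of what a post-deployment audit check might look like in practice is given below: it compares per-group performance observed in the field against the values documented at approval time and flags degradation for escalation. The record structure and tolerance are illustrative assumptions, not a reference to any existing regulatory instrument.

```python
# Illustrative post-deployment audit check (assumed record structure and tolerance):
# live per-group recall is compared against the values documented when the tool was
# approved, and any degradation beyond tolerance is flagged for the review board.
from dataclasses import dataclass
from typing import List

@dataclass
class AuditRecord:
    group: str
    baseline_recall: float  # documented during pre-deployment evaluation
    live_recall: float      # observed during post-deployment monitoring

def flag_degradations(records: List[AuditRecord], tolerance: float = 0.05) -> List[str]:
    """Return the groups whose live performance dropped beyond the tolerance."""
    return [r.group for r in records
            if (r.baseline_recall - r.live_recall) > tolerance]

audit = [AuditRecord("group_a", baseline_recall=0.82, live_recall=0.80),
         AuditRecord("group_b", baseline_recall=0.79, live_recall=0.68)]
assert flag_degradations(audit) == ["group_b"]  # group_b triggers escalation
```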

6.3. Education, Literacy, and Workforce Development

Digital and AI-based mental health tools can only be scaled equitably if clinicians, support staff, and users are equipped with the necessary competencies. Mental health professionals must understand the capabilities and limitations of AI systems to use them critically and responsibly. Similarly, users need adequate digital literacy to make informed choices and protect their rights [29]. National digital health strategies should, therefore, include targeted training programs, ethical AI curricula in medical and computer science education, and public awareness campaigns. Special attention should be paid to non-specialist health workers and underresourced regions, where AI tools may be used to extend care in the absence of formal psychiatric services.
Together, these three domains—design, governance, and literacy—form a strategic foundation for ethically aligned AI in mental health. While policy and regulation provide essential guardrails, ethical integration ultimately depends on how these principles are embedded into everyday practices of development, deployment, and care.

7. Discussion

In this paper, we have examined the integration of artificial intelligence into mental health systems in the context of ethics and policy. While much of the existing literature emphasizes technical feasibility, diagnostic accuracy, or clinical potential, this analysis has foregrounded the broader social, regulatory, and institutional challenges that shape AI implementation. The discussion now turns to two broader implications: the reframing of AI innovation through a public interest lens, and the need for future research to address unresolved governance questions.
AI is often framed as a fix for systemic gaps in mental health care; however, this narrative risks obscuring the political and economic roots of these challenges. For AI to improve mental health at scale, its development and deployment must be aligned not only with individual autonomy and privacy, but with collective values such as social justice, democratic oversight, and solidarity [13,27]. This requires shifting from a market-driven model of digital innovation toward one rooted in the public interest.
Future research must bridge the gap between abstract ethical principles and practical implementation. What institutional models can turn values into enforceable standards? How can algorithmic impact assessments be adapted for mental health contexts? What forms of participatory governance are feasible in underresourced settings? These questions require interdisciplinary approaches that combine technical insight with policy analysis, qualitative research, and legal scholarship.
When it comes to institutional models able to translate values into enforceable standards, future research may use other high-risk AI domains such as biometric surveillance, automated credit scoring, and employment screening as an example. Emerging governance frameworks in these areas include algorithmic auditing procedures, red-teaming protocols [30], and regulatory sandboxes [31], i.e., controlled environments in which AI systems are evaluated in practice before deployment. For example, the European Commission’s proposed AI Liability Directive (European Commission, Liability Rules for Artificial Intelligence, https://commission.europa.eu/business-economy-euro/doing-business-eu/contract-rules/digital-contracts/liability-rules-artificial-intelligence_en, last accessed: 11 June 2025) and the OECD’s AI Risk Classification Framework both outline mechanisms for accountability, including documentation obligations, third-party oversight, and domain-specific risk thresholds. In the mental health context, such mechanisms could support the operationalization of algorithmic impact assessments by linking them to institutional review boards, health data custodians, or public registries of AI tools in clinical use. Exploring these cross-sectoral analogues offers a promising foundation for designing domain-sensitive governance structures that are both flexible and enforceable.
Moreover, national strategies must be context-specific. As the case of Greece illustrates, digital mental health policy cannot be separated from broader questions of health infrastructure, administrative capacity, and public trust. International frameworks may offer useful templates, but their local uptake depends on political will, stakeholder engagement, and institutional readiness.

Limitations

This study presents a strategic review informed by the PRISMA-ScR methodology, combining a scoping review of academic literature with thematic analysis of policy and grey sources. While this approach offers a broad and interdisciplinary perspective, it also has specific limitations, beginning with the selection of academic literature: although guided by inclusion criteria and keyword-based searches, our search was constrained by database coverage (limited to PubMed, Scopus, and Google Scholar), language (English-only sources), and publication date (post-2018). As a result, relevant contributions in non-indexed journals or non-English contexts may have been excluded, potentially biasing the sample toward Western or Global North perspectives.
In addition, while the PRISMA-ScR methodology facilitates transparency in reporting and emphasizes breadth over depth, it does not require critical appraisal of study quality. Therefore, while we identified key themes and trends, we did not conduct formal assessments of evidence robustness, clinical efficacy, or methodological rigor across included studies, effectively limiting the extent to which our findings can support causal claims or fine-grained comparative evaluations of AI applications in mental health.
Another limitation has to do with the thematic synthesis of policy documents and grey literature, which is essential for capturing current governance landscapes, but introduces subjectivity in interpretation. These documents vary in scope, specificity, and political context, and our coding framework, while value-driven (e.g., fairness, accountability), reflects certain normative assumptions that may not be universally shared. Moreover, given the dynamic nature of both AI technologies and mental health governance, our review offers a snapshot in time that may soon be outdated.
Finally, the inclusion of illustrative case studies (e.g., Woebot, Wysa, Mindstrong) highlights important ethical tensions, but these examples were chosen for their prominence and accessibility rather than systematic representativeness. Future research should extend this work with empirical studies, stakeholder interviews, and comparative policy analysis to test the generalizability of the conclusions presented here.

8. Conclusions

Artificial intelligence has shown the potential to support more personalized, efficient, and accessible mental health services. However, realizing this potential requires ethical integration, strategic foresight, and regulatory clarity. In this paper, we argued that the responsible deployment of AI in mental health depends on three core preconditions: values-based and participatory design, robust oversight and accountability mechanisms, and investment in digital literacy and workforce training. Current international strategies, including those from the WHO and EU, reflect growing recognition of these needs, but practical implementation remains uneven across national contexts. Without concerted efforts to bridge this gap, AI risks reinforcing existing inequalities, eroding trust in mental health services, and compromising fundamental rights.
To address these risks and promote inclusive innovation, the paper offers the following policy recommendations:
  • National governments should incorporate AI governance into mental health policy, with attention to data protection, bias mitigation, and inclusive design;
  • Health ministries and public regulators should mandate impact assessments and post-deployment audits for AI tools used in clinical or community mental health;
  • Training programs in ethical AI should be embedded in medical, psychological, and computing education [32], alongside public digital literacy initiatives;
  • Funding schemes should support participatory design research that centers marginalized voices and lived experience in mental health AI development.
As AI systems become more prevalent in mental healthcare, their legitimacy will depend not only on technical performance, but on whether they uphold the rights, dignity, and diverse needs of those they are meant to serve.

Author Contributions

Conceptualization, A.K.F. and K.K.; methodology, K.K.; investigation, A.K.F.; writing—original draft preparation, A.K.F.; writing—review and editing, A.K.F. and K.K.; supervision, K.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Dataset available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shimada, K. The role of artificial intelligence in mental health: A review. Sci. Insights 2023, 43, 1119–1127. [Google Scholar] [CrossRef]
  2. Bzdok, D.; Meyer-Lindenberg, A. Machine learning for precision psychiatry: Opportunities and challenges. Biol. Psychiatry Cogn. Neurosci. Neuroimaging 2018, 3, 223–230. [Google Scholar] [CrossRef] [PubMed]
  3. Holmes, E.A.; O’Connor, R.C.; Perry, V.H.; Tracey, I.; Wessely, S.; Arseneault, L.; Ballard, C.; Christensen, H.; Silver, R.C.; Everall, I.; et al. Multidisciplinary research priorities for the COVID-19 pandemic: A call for action for mental health science. Lancet Psychiatry 2020, 7, 547–560. [Google Scholar] [CrossRef] [PubMed]
  4. Morley, J.; Machado, C.C.; Burr, C.; Cowls, J.; Joshi, I.; Taddeo, M.; Floridi, L. The ethics of AI in health care: A mapping review. Soc. Sci. Med. 2020, 260, 113172. [Google Scholar] [CrossRef]
  5. Vayena, E.; Blasimme, A.; Cohen, I.G. Machine learning in medicine: Addressing ethical challenges. PLoS Med. 2018, 15, e1002689. [Google Scholar] [CrossRef]
  6. World Health Organization. Global Strategy on Digital Health 2020–2025; World Health Organization: Geneva, Switzerland, 2021.
  7. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef]
  8. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.; Horsley, T.; Weeks, L.; et al. PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef]
  9. Benzies, K.M.; Premji, S.; Hayden, K.A.; Serrett, K. State-of-the-evidence reviews: Advantages and challenges of including grey literature. Worldviews-Evid.-Based Nurs. 2006, 3, 55–61. [Google Scholar] [CrossRef]
  10. Watkins, E.R.; Warren, F.C.; Newbold, A.; Hulme, C.; Cranston, T.; Aas, B.; Bear, H.; Botella, C.; Burkhardt, F.; Ehring, T.; et al. Emotional competence self-help app versus cognitive behavioural self-help app versus self-monitoring app to prevent depression in young adults with elevated risk (ECoWeB PREVENT): An international, multicentre, parallel, open-label, randomised controlled trial. Lancet Digit. Health 2024, 6, e894–e903. [Google Scholar]
  11. Bralee, E.; Mostazir, M.; Warren, F.C.; Newbold, A.; Hulme, C.; Cranston, T.; Aas, B.; Bear, H.; Botella, C.; Burkhardt, F.; et al. Brief use of behavioral activation features predicts benefits of self-help app on depression symptoms: Secondary analysis of a selective prevention trial in young people. J. Consult. Clin. Psychol. 2025, 93, 293. [Google Scholar] [CrossRef]
  12. Milne-Ives, M.; Selby, E.; Inkster, B.; Lam, C.; Meinert, E. Artificial intelligence and machine learning in mobile apps for mental health: A scoping review. PLoS Digit. Health 2022, 1, e0000079. [Google Scholar] [CrossRef] [PubMed]
  13. Sharon, T. When digital health meets digital capitalism, how many common goods are at stake? Big Data Soc. 2018, 5, 2053951718819032. [Google Scholar] [CrossRef]
  14. Keles, S. Navigating in the moral landscape: Analysing bias and discrimination in AI through philosophical inquiry. AI Ethics 2023, 5, 555–565. [Google Scholar] [CrossRef]
  15. Fitzpatrick, K.K.; Darcy, A.; Vierhile, M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Ment. Health 2017, 4, e7785. [Google Scholar] [CrossRef] [PubMed]
  16. Birk, R.H.; Samuel, G. Digital phenotyping for mental health: Reviewing the challenges of using data to monitor and predict mental health problems. Curr. Psychiatry Rep. 2022, 24, 523–528. [Google Scholar] [CrossRef]
  17. Mak, W.W.; Ng, S.M.; Leung, F.H. A web-based stratified stepped care platform for mental well-being (TourHeart+): User-centered research and design. JMIR Form. Res. 2023, 7, e38504. [Google Scholar] [CrossRef]
  18. Valmaggia, L.R.; Latif, L.; Kempton, M.J.; Rus-Calafell, M. Virtual reality in the psychological treatment for mental health problems: An systematic review of recent evidence. Psychiatry Res. 2016, 236, 189–195. [Google Scholar] [CrossRef]
  19. Allers, S.; Carboni, C.; Eijkenaar, F.; Wehrens, R. A Cross-Disciplinary Analysis of the Complexities of Scaling Up eHealth Innovation. J. Med. Internet Res. 2024, 26, e58007. [Google Scholar] [CrossRef]
  20. Elyoseph, Z.; Gur, T.; Haber, Y.; Simon, T.; Angert, T.; Navon, Y.; Tal, A.; Asman, O. An ethical perspective on the democratization of mental health with generative AI. JMIR Ment. Health 2024, 11, e58011. [Google Scholar] [CrossRef]
  21. European Commission. Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act); COM/2021/206 Final; European Commission: Brussels, Belgium, 2021. [Google Scholar]
  22. European Commission. A Comprehensive Approach to Mental Health; COM(2023) 298 Final; European Commission: Brussels, Belgium, 2023. [Google Scholar]
  23. UNICEF Office of Innovation. Mental Health Innovation Portfolio; UNICEF: New York, NY, USA, 2022. [Google Scholar]
  24. Ministry of Economic Affairs and Employment. Leading the Way into the Age of Artificial Intelligence: Final Report of Finland’s Artificial Intelligence Programme 2019; Ministry of Economic Affairs and Employment: Helsinki, Finland, 2019.
  25. Kerstein, R. NHS X-plained. Bull. R. Coll. Surg. Engl. 2020, 102, 48–49. [Google Scholar] [CrossRef]
  26. Pantic, M.; Caridakis, G.; André, E.; Kim, J.; Karpouzis, K.; Kollias, S. Multimodal emotion recognition from low-level cues. In Emotion-Oriented Systems: The Humaine Handbook; Springer: Berlin/Heidelberg, Germany, 2010; pp. 115–132. [Google Scholar]
  27. London, A.J. Artificial intelligence and black-box medical decisions: Accuracy versus explainability. Hastings Cent. Rep. 2019, 49, 15–21. [Google Scholar] [CrossRef] [PubMed]
  28. Eubanks, V. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor; St. Martin’s Press: New York, NY, USA, 2018. [Google Scholar]
  29. Karizat, N.; Vinson, A.H.; Parthasarathy, S.; Andalibi, N. Patent applications as glimpses into the sociotechnical imaginary: Ethical speculation on the imagined futures of emotion AI for mental health monitoring and detection. Proc. ACM Hum.-Comput. Interact. 2024, 8, 1–43. [Google Scholar] [CrossRef]
  30. Feffer, M.; Sinha, A.; Deng, W.H.; Lipton, Z.C.; Heidari, H. Red-Teaming for generative AI: Silver bullet or security theater? In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, San Jose, CA, USA, 21–23 October 2024; Volume 7, pp. 421–437. [Google Scholar]
  31. Truby, J.; Brown, R.D.; Ibrahim, I.A.; Parellada, O.C. A sandbox approach to regulating high-risk artificial intelligence applications. Eur. J. Risk Regul. 2022, 13, 270–294. [Google Scholar] [CrossRef]
  32. Panagopoulou, F.; Parpoula, C.; Karpouzis, K. Legal and ethical considerations regarding the use of ChatGPT in education. arXiv 2023, arXiv:2306.10037. [Google Scholar]
Figure 1. Summary of PRISMA-related information.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
