Systematic Review
Peer-Review Record

Artificial Intelligence and the Future of Mental Health in a Digitally Transformed World

Computers 2025, 14(7), 259; https://doi.org/10.3390/computers14070259
by Aggeliki Kelly Fanarioti and Kostas Karpouzis *
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3:
Reviewer 4: Anonymous
Submission received: 19 May 2025 / Revised: 25 June 2025 / Accepted: 26 June 2025 / Published: 30 June 2025
(This article belongs to the Special Issue AI in Its Ecosystem)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This review article is highly timely and nicely inclusive of a range of ethical and policy considerations, citing key international guidelines and references in doing so.

I have two main pieces of advice that I think may actually be necessary to follow for this to be a lasting and highly cited publication.

First, given the title, it needs to go into a lot more detail regarding exactly how AI functions in a way that can affect people's goals, relationships, and mental health. Most of the article could apply just as well to digital systems generally, yet the rise of generative AI in the last few years is unprecedented and profound in several ways. These include the ethics of developing attachments and companionship with AI and the vulnerability this creates, for example if the AI system is withdrawn or reprogrammed (see incidents with Replika in the public domain). It also includes the increasing capacity for people to create or generate images or video of whatever they want, regardless of whether this is legal, realistic, or good or bad for their own mental health. Finally, there is the issue of AI-mediated bureaucracy, to the extent that policy and clinical decisions could increasingly be 'offshored' to AI, making the human responsibility for, and connection to, such decisions even more distant.

Second, and related to the above, the whole article, although well written, seems exceedingly academic and distant, given the subject matter. Increasingly, the living and lived experience of mental health challenges has proved vital for making any recommendations; I can see that the article makes this case - but the article itself seems bereft of first-person accounts or contributions. I feel that one or more of the following is needed: (a) a lived-experience co-author, (b) vignettes or personal accounts to illustrate the issues - both good and bad - of AI and digital mental health, or (c) case examples of the strengths and risks of recent specific digital mental health innovations, pointing to how they illustrate the points being made in the article.

A theoretical lens to the article would also be appreciated, but not essential. 

Overall, this is a well-written overview that needs grounding in the complexities of the interface between AI and people's lives.

Author Response

We would like to thank the reviewer for their detailed and constructive comments. 

Comment 1: "given the title it needs to go into a lot more detail regarding exactly how AI functions in a way that can affect people's goals, relationships and mental health[...] the rise in generative AI in the last few years is unprecedented and profound is several ways. These includes the ethics of developing attachments and companionship with AI and the vulnerability that provides, for example if the AI system is withdrawn or reprogrammed [...] there is the issue of AI-mediated bureacracy to the extent that policy and clinical decisions could be increasingly 'offshored' to AI and the human responsibility and connection of such decisions made even more distant. "

Revision 1: We have added material on affective computing, AI companions, and parasociality in Section 3, introduced generative AI as a distinct subtheme in the same Section, and added a paragraph on human disconnection in algorithmic governance in Section 5.

Comment 2: "Second, and related to the above, the whole article, although well written, seems exceedingly academic and distant, given the subject matter. Increasingly the living and lived experience of mental health challenges have proved vital for making any recommendations; I can see that the article makes this case - but the article itself seems bereft of first-person accounts or contribution. I feel that one or more of the following is needed: (a) a lived experience coauthor, (b) vignettes or personal accounts to illustrate the issues - both good and bad - of AI and digital mental health, (c) case examples of the strengths and risks of recent specific digital mental health innovations, pointing to how they illustrate the points being made in the article. "

Revision 2: Unfortunately, we are not able to introduce additional co-authors or personal accounts at this stage. To address this comment, we added a 'Case studies' subsection containing real-world examples that merit further investigation.

Reviewer 2 Report

Comments and Suggestions for Authors

I found this manuscript to be comprehensive, well-organized, and well-cited, offering a strong contribution to discussions around AI and digital transformation in mental health. It contained a thorough overview of current AI applications in mental health, and a balanced treatment of synchronous/asynchronous models, diagnostics, therapeutics, monitoring, and immersive technology. The quality of the English language was excellent, and I found no instances of grammatical or typographical errors.

I would make the following suggestions prior to publication:

In Section 3, consider more clearly distinguishing between AI-enabled and AI-exclusive interventions.

In the sentence on predictive systems and stepped-care models (lines 141–142), a brief example of such a system (e.g., a specific app or program) could ground the discussion.

Consider briefly acknowledging the cost or scalability limitations of immersive technologies (lines 144–147).

The final paragraph (lines 201–204) is key; consider expanding slightly on what “institutional capacity and stakeholder engagement” entails in practical terms.

To balance the description of Greece in Section 4, consider briefly noting a national model that is making progress (e.g., Finland’s AI strategy or the UK’s NHS AI Lab).

In Section 5, a brief mention of funding models or public-private partnerships could strengthen the feasibility aspect of scaling ethical AI.

The phrase "increasingly embedded" appears multiple times; consider rephrasing to avoid redundancy.

Author Response

We would like to thank Reviewer 2 for their detailed and constructive comments on our work. We integrated the suggested revisions in the new version of the paper; these are shown in yellow highlight. 

Reviewer 3 Report

Comments and Suggestions for Authors

This article could be considerably improved by revising it into a full-fledged scoping review, including a PRISMA diagram.

Author Response

We would like to thank the reviewer for their remark.

Comment 1: "This article could be considerably improved by revising it into a full-fledged scoping review, including a PRISMA diagram."

Response 1: We have followed the PRISMA extension for scoping reviews for this work and now mention it in the Methodology section.

Reviewer 4 Report

Comments and Suggestions for Authors

I would like to thank the authors for the opportunity to review an interesting piece of work, which will undoubtedly be of interest to the readership of 'Computers'. The manuscript addresses the critical intersection of Artificial Intelligence (AI) and mental health, with a commendable focus on ethical and policy implications. The paper effectively argues for a value-driven approach to AI deployment in this sensitive domain. The authors provide a broad overview, covering AI applications, policy frameworks at international and national levels, significant ethical challenges, and strategic preconditions for ethical integration. They also excelled in their thorough examination of ethical concerns, including data privacy and consent, algorithmic bias and accountability, and digital exclusion. The discussion of "AI-mediated bureaucracy" is particularly insightful.

However, there are some areas which need addressing in order for the manuscript to be considered for publication:

  1. The methodology states that the paper extended an original treatise with the PRISMA extension for scoping reviews (PRISMA-ScR). While the search strategy is described, providing more explicit detail on how the PRISMA-ScR checklist (e.g., specific items related to eligibility, charting, and synthesis of results beyond thematic analysis) guided the review process for this "strategic review" could strengthen the methodological rigor.
  2. The case studies of Woebot, Wysa, and Mindstrong are illustrative. This section could be enhanced by a more direct comparative synthesis at the end of section 3.1, perhaps explicitly drawing out common lessons these specific examples offer regarding the "tension between technical development and ethical infrastructure".
  3. Section 6 discusses "Values-Based Design" and mentions principles like autonomy and fairness. It could be beneficial to provide a brief example or two of how such high-level values can be practically translated into specific design features or choices during the development of an AI mental health tool, moving beyond the general call for interdisciplinary collaboration.
  4. The discussion calls for future research on institutional models to turn values into enforceable standards and adapt algorithmic impact assessments. Briefly suggesting what such novel governance models might entail, or pointing to nascent examples in other high-risk AI domains, could offer a more concrete starting point for the proposed future research.
  5. Please double-check all references to ensure they follow the format prescribed by the journal guidelines.

Author Response

We would like to thank the reviewer for their positive and insightful comments, which helped us improve the manuscript; such thorough and constructive reviews are very rare these days.

C1: "While the search strategy is described, providing more explicit detail on how the PRISMA-ScR checklist guided the review process for this "strategic review" could strengthen the methodological rigor."

R1: We have added the PRISMA scoping review information in the Methodology section and uploaded the checklist as an additional document.

C2: "The case studies of Woebot, Wysa, and Mindstrong are illustrative. This section could be enhanced by a more direct comparative synthesis at the end of section 3.1, perhaps explicitly drawing out common lessons these specific examples offer regarding the 'tension between technical development and ethical infrastructure'".

R2: We added a paragraph towards the end of Section 3, discussing the similarities and differences between the indicative case studies.

C3: "Section 6 discusses "Values-Based Design" and mentions principles like autonomy and fairness. It could be beneficial to provide a brief example or two of how such high-level values can be practically translated into specific design features or choices during the development of an AI mental health tool, moving beyond the general call for interdisciplinary collaboration."

R3: We added a paragraph in the values-based design subsection, including examples of how the suggested principles can be incorporated into mental health applications.

C4: "The discussion calls for future research on institutional models to turn values into enforceable standards and adapt algorithmic impact assessments. Briefly suggesting what such novel governance models might entail, or pointing to nascent examples in other high-risk AI domains, could offer a more concrete starting point for the proposed future research."

R4: We added a paragraph and two citations towards the end of the Discussion section to suggest relevant governance frameworks.

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

I am fine with the revised manuscript, well done!

Author Response

Thank you for your comments and support!

Reviewer 3 Report

Comments and Suggestions for Authors

Although in your revision you have added a mention of doing a scoping review in the methodology section, I do not see in your revision a full-fledged scoping review with a related PRISMA diagram. Such a revision should also extend to your results, discussion, conclusion, and references.

Author Response

C1: "Although in your revision you have added in the methodology section mention of doing a scoping review, I do not see in your revision a full fledged scoping review with a related PRISMA diagram."

R1: Thank you for this comment; we have added the PRISMA scoping review diagram with the relevant information, uploaded the checklist as an additional file, and extended the methodology and discussion sections of the manuscript accordingly.

Round 3

Reviewer 3 Report

Comments and Suggestions for Authors

Now that this article has been revised to more clearly report a scoping review, the abstract should state that and the discussion should add related limitations. 

Author Response

C1: "Now that this article has been revised to more clearly report a scoping review, the abstract should state that and the discussion should add related limitations."

R1: Thank you for pointing out this omission on our part. We added a sentence to the abstract related to the use of the PRISMA methodology and introduced a subsection on Limitations as part of the Discussion section.
