Ethical Decision-Making Guidelines for Mental Health Clinicians in the Artificial Intelligence (AI) Era
Abstract
1. Introduction
2. Methodology
3. Ethical Principles of Mental Health and AI
3.1. Autonomy and Informed Consent
3.2. Beneficence and Non-Malfeasance
3.3. Confidentiality, Privacy, and Transparency
3.4. Justice, Fairness, and Inclusiveness
3.5. Fidelity, Professional Integrity, and Accountability
4. Ethical Framework for Mental Health Professionals
4.1. Autonomy and Informed Consent
- i. Clinicians must disclose to the client whenever AI is used in their treatment, and this disclosure must include the AI's capabilities, limitations, potential impacts on diagnosis, access to treatment, and cost implications.
- ii. Clinicians must provide information regarding the type of AI tools that will be used, their impact on the client's treatment, how the data are collected, stored, and analyzed, and, where relevant, the role and involvement of third parties in the process.
- iii. Clinicians must allow the client to exercise their right to opt out of AI-assisted treatments or decision-making processes and, where feasible, offer a human-based alternative.
- iv. Clinicians must ensure that the language used provides clear and understandable details about the use of AI, empowering clients to give fully informed consent.
4.2. Beneficence and Non-Malfeasance
- i. The therapeutic relationship remains central to ethical clinical care; the use of AI must therefore not be viewed as a substitute for human connection but as a complementary modality that enhances the therapeutic alliance in assessment, diagnosis, and treatment planning.
- ii. AI must be explored as a complementary modality only when the clinician is competent in understanding, interpreting, and explaining its results to the client and relevant stakeholders.
- iii. Clinicians must select AI tools that are culturally appropriate, reliable, and valid, that have evidence-based research support, and that minimize the perpetuation of inequities.
- iv. Clinicians must use AI tools to promote client well-being and minimize risk rather than to justify or contribute to discriminatory practices; this includes auditing algorithms to identify and address ethical concerns or unintended risks.
- v. AI tools must adhere to professional and ethical standards and must be evaluated for accuracy and appropriateness on a regular schedule.
4.3. Confidentiality and Transparency
- i. The same confidentiality standards that govern traditional care, e.g., compliance with HIPAA, federal and state laws, and relevant professional codes, including secure data practices, ethical recordkeeping, encryption, and storage protection, must also apply to AI tools.
- ii. Clinicians must ensure compliance with confidentiality, privacy, and ethical standards, including HIPAA and relevant professional and legal regulations, when third-party vendors or AI platforms are used.
- iii. Clinicians must be ethically accountable for the use of AI tools and must provide accurate information about AI capabilities and limitations, avoiding misleading claims.
- iv. Clinicians should advocate for transparency by disclosing how AI models are developed and how algorithms are applied.
4.4. Justice, Fairness, and Inclusiveness
- i. Clinicians have a responsibility to ensure that AI systems do not disadvantage individuals or justify discrimination based on marginalized identities.
- ii. Clinicians must examine the AI system's function, design, and output to ensure equitable and ethical application and evaluate assessments for cultural bias and fairness.
- iii. Clinicians must promote AI systems that enhance justice and safety for all clients and must advocate for inclusive and transparent design features that support the ethical obligations of mental health professionals.
4.5. Fidelity, Professional Integrity, and Accountability
- i. Clinicians should use AI tools only when they have the training and competence to interpret the results responsibly and accurately and fully understand the implications, limitations, and ethical use of AI.
- ii. Clinicians must stay informed about best practices and the risks of emerging tools by keeping current with training in AI and digital ethics, in collaboration with AI technologists.
- iii. Clinical supervisors must model ethical AI use and guide supervisees in critically evaluating algorithmic tools in their clinical practice.
5. Case Study
5.1. Application of the Ethical Framework
5.1.1. Autonomy and Informed Consent
- i. The purpose of using the AI tool Wysa, i.e., to support the client between sessions.
- ii. The benefits and limits of Wysa, emphasizing that the tool does not replace Dr. Emre's role as a clinician.
- iii. Assurance that the information collected will be analyzed and stored securely, and disclosure of the extent to which the vendor Wysa has access to data for maintenance purposes.
- iv. Abdi's option to opt out of AI support and continue with traditional therapy.
- v. Abdi's option, if he consents to use of the AI tool, to revoke consent at any time.
5.1.2. Beneficence and Non-Malfeasance
- i. Complete training and professional development relevant to AI ethics and to the interpretation of data specific to the Wysa platform.
- ii. Focus on rapport building with Abdi.
- iii. Review any notes generated by Wysa for cultural relevance and accuracy.
- iv. Ensure that no clinical recommendations are implemented without review by him or other clinical staff.
- v. Review the scientific evidence for Wysa's validity, reliability, and efficacy for multicultural and international populations.
5.1.3. Confidentiality and Transparency
- i. Informing Abdi who can have access to his AI data and for what reasons.
- ii. Restricting how and with whom AI data are shared, and ensuring that the data are encrypted and stored on secure servers.
- iii. Discussing that not all of the AI's functions are transparent, such as the algorithms, which are the property of the vendor, but that every step has been taken to ensure the credibility, fairness, and security of Wysa as an AI platform.
5.1.4. Justice, Fairness, and Inclusiveness
- i. Verifying that Wysa, as a generative AI tool, has been trained on diverse linguistic datasets to reduce bias.
- ii. Reporting bias to the vendor if he notices frequent errors that may be related to marginalized groups, and advocating for regular equity audits of how Wysa performs across different demographic groups, ethnicities, and races.
5.1.5. Fidelity, Professional Integrity, and Accountability
- i. Staying current with AI developments through conference attendance and other continuing professional development, and by completing training related to AI and digital ethics and, in Abdi's case, to the Wysa AI platform.
- ii. As a mental health professional and clinical supervisor, modeling the ethical use of AI, especially when training future mental health professionals to critically evaluate, rather than rely on, algorithmic outputs so that clients are not harmed by the inclusion of AI in the clinical setting.
- iii. Communicating when AI outputs should not be used and being transparent about the rationale.
6. Checklist for Clinicians Prior to AI Use
- i. Consent: Convey to the client what the AI does, its limitations, and how data will be managed and used, and offer the option to opt out in favor of a human-delivered alternative.
- ii. Vet vendors: The AI platform should be tested and peer reviewed for efficacy and bias prior to its adoption.
- iii. Data security: Ensure that data are encrypted and stored securely, and explain clearly to the client how the data will be used. The specifics of a Business Associate Agreement should be discussed during the consenting process.
- iv. Competence: Ensure that you have sufficient knowledge to interpret the AI outputs and understand the scope of the algorithms.
- v. Inclusivity: Evaluate whether the AI platform has been designed for diverse populations.
- vi. Transparency: Be clear with clients about what the AI does and does not do, and remain accountable for the decisions made.
7. Limitations and Future Directions
8. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Pillay, Y. Ethical Decision-Making Guidelines for Mental Health Clinicians in the Artificial Intelligence (AI) Era. Healthcare 2025, 13, 3057. https://doi.org/10.3390/healthcare13233057

