Review

Artificial Intelligence in Mental Health Care: Management Implications, Ethical Challenges, and Policy Considerations

by Stephan Hoose 1,* and Kristína Králiková 2,*
1 Faculty of Business and Economics, Mendel University in Brno, Zemědělská 1665/1, 613 00 Brno, Czech Republic
2 Rectorate, Academy of the Police Force in Bratislava, Sklabinská 1, 835 17 Bratislava, Slovakia
* Authors to whom correspondence should be addressed.
Adm. Sci. 2024, 14(9), 227; https://doi.org/10.3390/admsci14090227
Submission received: 24 August 2024 / Revised: 14 September 2024 / Accepted: 14 September 2024 / Published: 17 September 2024

Abstract
Adopting artificial intelligence (AI) in the provision of psychiatric services has been groundbreaking and offers new means of addressing some of the shortcomings of traditional methods. This paper aims to analyze the applicability and efficiency of AI in mental health practices from a business administration perspective, with a focus on service management and policy. It follows a systematic, synoptic process in which current AI technologies in mental health are examined against the existing literature regarding their usefulness in service delivery and the ethical considerations surrounding their application. The study indicates that AI can improve the availability, relevance, and effectiveness of mental health services, information that can be useful to policymakers in the management of health care. At the same time, specific concerns arise, such as algorithmic bias, data privacy, and the risk that automated systems diminish the human factor in care. The review also brings to light an under-explored area: the long-run effects of AI-driven interventions. The findings suggest that further research should address ethical factors, strengthen the ethical standards governing AI use in administration, and explore cooperation between mental health practitioners and AI engineers in applying AI to psychiatric practice. Proposed solutions therefore include enhancing AI functions and ethical standards and ensuring that policy instruments are favorable to the use of AI in mental health.

1. Introduction

Mental health care services have become a necessity due to the emerging need for proper health care, societal change, and the dynamism of mental health disorders. The traditional styles of delivering mental health care, drawn from limited resource settings and marred by long waiting lists, are under immense pressure to meet these growing needs. Consequently, new technologies, especially those that encompass AI, are being considered a possible solution for the advancement of mental health service delivery. As some experts claim in this context (Filip et al. 2023; Srebalová et al. 2018; Tache and Sararu 2023), mental health is part of the first generation of human rights. AI has potential in that it can provide features such as individualized treatment, prognosis, and self-organizing assistance systems that can significantly enhance the delivery and organization of mental health care. However, although the introduction of AI in mental health care has its merits, as discussed below, it comes with unique challenges. Moreover, matters such as ethical concerns, data privacy, algorithmic bias, and decreased interactivity in human-centered systems must not be taken lightly, as they are key considerations and challenges to the efficient and appropriate application of AI in this field. Thus, a detailed analysis of AI’s place in mental health management is needed to identify best practices for its implementation and the necessary policies that would minimize risks and maximize benefits for all stakeholders (Husnain et al. 2024).
Thesis Statement: This paper has the following objectives: to evaluate the relevance and efficiency of AI technologies in mental health care and to explore such technologies’ implications for service management and policymaking. It also identifies best practices for weighing AI’s positive and negative aspects and for using it to address the core needs of mental health care.
Mental illnesses are increasingly widespread, affecting millions of people across the post-industrial world. Economic pressures, lack of interpersonal contact, and the after-effects of COVID-19 have produced new cases of major depression, anxiety disorders, and other stress-related disorders (Mentis et al. 2023). While conventional methods of mental health treatment have offered substantial support, they come with several challenges, such as problems with access, stigma, and an inadequate number of mental health workers. These issues call into question the ability to close the gaps between what is needed and timely, efficient care.
Intelligent systems have lately been considered a promising approach to address these issues within the sphere of mental health (Zahlan et al. 2023). In this field, AI’s greatest benefit is that it can analyze vast amounts of data and identify trends and patterns that may help in providing highly individualized, patient-oriented care. Currently, AI is used in different ways, including virtual therapists, mental health screening and tracking, and forecasting of potential future mental health issues. These developments may open up mental health care for people who find it difficult to access for any reason, including geographical location and lack of resources or professionals, and can offer consistent care in readily deployable forms that are hard for conventional therapies to achieve.

1.1. Purpose and Significance

The aim of the current paper is to present an overview of AI in mental health care, its applications and effectiveness, and the management implications for business administration. This review brings together the current state of AI practice in mental health, from the identification of mental health disorders to monitoring and treatment (Milne-Ives et al. 2020). It also seeks to present a general view of the state of the art in AI-enabled mental health care and the possible ethical, social, and technical problems (Alhuwaydi 2024).
This review is important for several reasons. First, it provides an evaluation of existing AI solutions as used in today’s mental health systems, with information on what has been achieved and what remains to be done. Second, it raises awareness of the need to assess the effects of AI on mental health in terms of its ethical, social, and legal consequences. Finally, it is hoped that this review will advance the discussion of the future of mental health care, providing information that policymakers, researchers, and mental health practitioners can use in the management and administration of AI technologies.

1.2. Current Status of the Research Domain

The application of AI to mental health treatment is still an evolving field in medicine, and scholarly interest from different disciplines is growing. Recent trends point to an increasing demand for AI technologies aimed at diagnosing mental health disorders and providing subsequent treatment and follow-up (Mirbabaie et al. 2021). For example, current applications include the use of AI on social media to identify markers of depression and anxiety and AI “chatbots” that provide cognitive behavioral therapy (CBT) and other forms of intervention.
Several researchers and reviews have discussed the possibilities and problems of using AI in mental health management. Shimada (2023) offers a future perspective on AI in mental health, emphasizing its strengths as well as areas of concern. Alhuwaydi (2024) expounds on present trends and prospective developments in the application of AI for mental health support and intervention, noting that more research is needed to realize AI’s full potential in mental health. Oladimeji et al. (2023) contribute an opinion piece on enhancing psychological and mental health, with particular reference to the benefits and drawbacks of AI.
Nevertheless, the integration of AI into mental health care has not been immune to criticism. Areas of concern include privacy, consent, and the assumptions embedded in AI models (Kaur et al. 2024). On the one hand, it is acknowledged that AI can be extremely efficient; on the other, for the same reason, it may well produce results that are less than ideal or even detrimental to the patient. Further, there is fear that with the increased use of AI, people will feel detached from one another, especially those who are generally uneasy about interpersonal communication.

1.3. Aim and Principal Conclusions

The main purpose of this review is to undertake a systematic and critical analysis of the current research on the application of AI in mental health practice, its implications for business management, and its policy implications for business administration. In light of current studies, this review aims to draw conclusions about the contribution of AI to the progress of mental health care and to point out directions for further research and practice (Mishra 2024).
The review comes to the following conclusions: AI has enormous potential in mental health, increasing the accessibility of interventions, raising the effectiveness of administered treatment, and enabling greater individualization. AI systems can be of particular use where resources are scarce because they can provide structured guidance on treatment tactics (Younis et al. 2024). Nevertheless, they are not without difficulties, such as ethical dilemmas, the need for validation, and the human aspect that must not disappear from psychiatry and psychology.
In addition, the review argues for more studies and discussion in this area, especially concerning the ethical risks of AI in mental health services, the social dilemmas of using AI in such services, and the technical challenges likely to be encountered in their provision. Addressing the efficiency/ethics dilemma and the integration of AI in mental health care will be important in the future. It is therefore critical that such technologies are deployed in a manner that is transparent and fair and, by extension, has a net positive impact on patients.

2. Materials and Methods

2.1. Data Collection

For the purpose of this review, data collection followed a methodical procedure in order to obtain a comprehensive and objective perspective on the impact of AI on mental health therapies. The literature review focused primarily on articles, conference papers, and authoritative reports published over the past ten years, so as to capture the ongoing advancement of AI technologies and their potential applications in the field of mental health (Baños et al. 2022). Papers were selected through two separate database searches complemented by a reference check.
Relevant, peer-reviewed literature linked to the topic was gathered from databases such as PubMed, IEEE Xplore, ScienceDirect, and Google Scholar. Frequently used search terms included “artificial intelligence”, “mental health”, “AI applications”, “digital therapy”, “mental health technology”, and “AI for mental health care”. Boolean operators (AND, OR, NOT) were applied to retrieve only the materials most relevant to the specific subject of the study.
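To make the search strategy concrete, the short Python sketch below assembles a Boolean query string from keyword groups of the kind listed above. The exact keyword groupings and the exclusion term are illustrative assumptions, not the authors’ actual search strings.

ai_terms = ['"artificial intelligence"', '"AI applications"']
domain_terms = ['"mental health"', '"digital therapy"',
                '"mental health technology"', '"AI for mental health care"']
excluded_terms = ['"animal models"']  # hypothetical exclusion, for illustration only

# Combine synonym groups with OR, join groups with AND, and drop noise with NOT.
query = ("(" + " OR ".join(ai_terms) + ")"
         + " AND (" + " OR ".join(domain_terms) + ")"
         + " NOT (" + " OR ".join(excluded_terms) + ")")
print(query)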
The selected publications investigated AI interventions serving specific functions in mental health care, including diagnostic technologies, treatment technologies, surveillance systems, and self-management devices. To obtain an up-to-date perspective on the technologies and techniques now in use, only papers published between 2013 and 2023 were considered. To maintain the credibility of the review, only peer-reviewed publications, conference papers, and reports of particularly high quality were used. Published data were used for the analysis; however, owing to language hurdles and the accessibility of the publications, only studies published in English were included.
Papers excluded from the analysis were those that were non-empirical, non-technological, or non-practical, as well as those with no connection to AI-based mental health technology. Records that were duplicates or for which the complete text was not accessible were also removed. The initial search yielded more than 1500 items relevant to the search keywords. After screening of titles and abstracts, applying the inclusion and exclusion criteria described above, roughly 250 studies remained. A full-text review then led to the final selection of 120 publications deemed most pertinent and worth reporting in this review.
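As a rough illustration of how the stated inclusion and exclusion criteria translate into a screening step, the following sketch filters a list of retrieved records; the record fields and example values are hypothetical, not the actual export from the databases.

records = [
    {"title": "AI chatbot for CBT delivery", "year": 2021, "language": "English",
     "type": "journal", "full_text": True, "duplicate": False, "about_ai_mental_health": True},
    # ... remaining retrieved records would be appended here
]

def passes_screening(r):
    # Mirror the stated criteria: 2013-2023, English, peer-reviewed/conference/report,
    # full text available, not a duplicate, and relevant to AI-based mental health technology.
    return (2013 <= r["year"] <= 2023
            and r["language"] == "English"
            and r["type"] in {"journal", "conference", "report"}
            and r["full_text"]
            and not r["duplicate"]
            and r["about_ai_mental_health"])

included = [r for r in records if passes_screening(r)]
print(f"{len(records)} screened, {len(included)} retained for full-text review")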
Based on several current scientific works (Mariš 2020; Funta and Horváth 2024), the methodology used in this review belongs to the descriptive or qualitative approach. This methodology involves reviewing existing literature and describing findings without the need to perform specific experimental procedures or quantitative analyses. Instead, it relies on synthesizing information from various sources to provide a comprehensive overview of the research subject (Peráček and Kaššaj 2023; Kaššaj and Peráček 2024).

2.2. Data Analysis

The chosen publications were subjected to rigorous analysis using thematic analysis and other qualitative techniques. Thematic analysis was used as the primary method because it provides a dense and detailed description of the dataset by identifying patterns in the text body, which are then examined as themes (Barua et al. 2022). Each of the selected articles was read several times to fully understand its content. Notes were first made to capture the following aspects, where present: the role of AI in the respective article, the results achieved in the field of mental health care, and the issues encountered along the way.
The codes were then clustered into broader categories that captured the core subjects across the evaluations. They were named as follows: “AI for better accessibility and inclusion”, “origins of algorithmic prejudice”, “data privacy concerns”, “AI as a tool or a therapist”, and “AI advancement in future”. Some of the identified themes were merged, modified, or removed to make them comprehensive, clear, distinct, and relevant to the research questions. In this process, all the papers were re-read to ensure that the themes appropriately captured the content. The overarching aims and objectives of the research were also enumerated for every theme, with an elaboration of the connections between them. The final key themes were labeled according to the essence they captured and were used to organize the results and discussion portions of the review.
The themes were further distilled into a comprehensive narrative addressing the research questions and objectives concerning the use of AI in mental health interventions, the outcomes, and their implications. Given the large amount of data identified in the analysis, qualitative data analysis software was used to assist with the coding and the identification of themes.
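The following minimal sketch illustrates the kind of keyword-based coding and theme counting described above; the keyword lists and example excerpts are assumptions made for illustration, not the actual codebook used in the review.

from collections import Counter

theme_keywords = {  # hypothetical codebook mirroring the named themes
    "AI for better accessibility and inclusion": ["access", "underserved", "availability"],
    "origins of algorithmic prejudice": ["bias", "skewed data", "unfair"],
    "data privacy concerns": ["privacy", "consent", "data protection"],
    "AI as a tool or a therapist": ["chatbot", "virtual therapist", "human clinician"],
    "AI advancement in future": ["scalability", "validation", "future research"],
}

def code_excerpt(excerpt):
    """Assign every theme whose keywords appear in a coded excerpt."""
    text = excerpt.lower()
    return [theme for theme, words in theme_keywords.items()
            if any(word in text for word in words)]

excerpts = ["The chatbot widened access for underserved rural users",
            "Training data were skewed, raising bias and consent questions"]
theme_counts = Counter(theme for e in excerpts for theme in code_excerpt(e))
print(theme_counts.most_common())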

2.3. Ethical Considerations

Ethical issues were an important aspect of the review, especially given the sensitivity of mental health and the application of AI technologies. To avoid the misuse of patients’ information and to present the results fairly, the review followed standard ethical procedures (Khalid et al. 2024). Although the main sources for the review were secondary in nature, i.e., published papers, their ethical implications with respect to the primary research were considered. Safeguarding patients’ autonomy and control was a critical concern, particularly the risk of data misuse in AI-based emotional health evaluation and treatment. The review also emphasized the need to protect patients’ privacy in AI applications by safeguarding the health information that is collected. For research involving human participants, an important requirement was that the original authors had obtained informed consent. The review included only papers where ethical approval was noted or where it could reasonably be assumed that ethical approval or a corresponding statement had been received (Blease et al. 2020).
The presence of bias in the development of AI algorithms is another important ethical consideration covered in the present review. The evaluation also assessed the level of openness of AI systems, especially regarding the methodology used to train the algorithms and the possibility of favoring one party in decision making (Gunasekeran et al. 2021). The review urges improvements in AI development processes to reflect diversity and to optimize algorithms for different populations. The review conformed to general ethical considerations concerning research with human participants as set out by the WHO and the APA.

3. Overview of AI in Mental Health

3.1. Historical Development

AI has developed progressively as a discipline over the years, drawing on both computer science and psychological insight (Nazar et al. 2021), and its use in the mental health sector has followed a similarly gradual course. The relationship between AI and mental health can be traced back to the middle of the 20th century, when the first AI models and algorithms were used as models of human cognition. Joseph Weizenbaum’s ELIZA, designed in the 1960s, was arguably the first attempt to build such a system: a straightforward natural language processing (NLP) program that imitated a Rogerian psychotherapist and conversed with the user through plain text.
Gradual improvements in the use of AI for mental health treatment followed, particularly during the 1980s and 1990s with the introduction of expert systems. These systems, initially used in mental health diagnosis and treatment planning (Omaghomi et al. 2024), were designed to stand in for human experts in the decision-making process. For instance, early diagnostic systems such as INTERNIST-I and its successors, although designed and implemented for general medicine, inspired great expectations for the application of AI to complex psychological problems. Despite these developments, early AI interventions in mental health were hindered by the limitations of the technology of the time: insufficient computational capability, a lack of data, and immature machine learning.
The beginning of the new century marked a significant shift in the way AI was being examined in the subject of mental health. This shift was brought about by improvements in machine learning and natural language processing as well as an increase in the amount of data in digital health domains (Jaliaawala and Khan 2019). From this moment on, intelligent systems evolved from rule-based expert systems to systems that could learn from data, hence gaining the ability to draw conclusions and make predictions. The development of applications utilizing AI that are intended for mental health evaluation, diagnosis, and even first-line clinical interventions took place during this time period, thereby laying the framework for the present generation of AI in the field of mental health.

3.2. Current Technologies

The current landscape of AI in mental health is fragmented, with existing methods adopted variously in the clinician’s toolbox, in research, and in self-care (Olawade et al. 2024). These technologies can be broadly categorized into three main areas: diagnostic and screening tools, therapeutic interventions, and monitoring of patients’ mental condition.
Diagnostic and screening tools help clinicians and other health care practitioners to identify mental health disorders with relative ease. These tools use machine learning to ingest large datasets, for instance, EHRs, genomic data, or patients’ social media posts, and flag symptoms consistent with mental health disorders (Philippe et al. 2021). For example, machine learning has been used to develop models that aim to identify the onset of depression by analyzing users’ language on social networks. Such tools can also help clinicians pick up early signals before they become major issues for the patient. In addition, AI tools are improving the accuracy of diagnostic assessment of mental health disorders in order to support more specific treatment plans.
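As a highly simplified sketch of the kind of language-based screening model described here, the snippet below trains a bag-of-words classifier to flag posts containing depressive language. The toy corpus, labels, and query text are invented for illustration; real studies use large, clinically labeled datasets and far more careful validation.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I can't sleep and nothing feels worth doing anymore",   # hypothetical examples
    "Had a great hike with friends this weekend",
    "So tired of everything, I just want to stay in bed all day",
    "Excited to start the new project at work tomorrow",
]
labels = [1, 0, 1, 0]  # 1 = depressive language present (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(posts, labels)

risk = model.predict_proba(["everything feels pointless lately"])[0, 1]
print(f"estimated probability of depressive language: {risk:.2f}")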
AI is now being used more in the delivery of therapeutic interventions, and virtual therapists and chatbots are among the most notable developments. Technologically aided therapeutic tools such as Wombat and Wyse are available online; they converse with users and provide them with CBT and other proven methods of treatment (Rajkishan et al. 2023). They are intended to be easy to use so they can help people who may not have access to an expensive and time-consuming form of therapy because of, for example, stigma, cost, or distance. In addition, applications such as Tess and Ellie are used alongside human therapists to supplement therapy by tracking shifts in the client’s behavior in real time and giving feedback.
Mental health monitoring systems based on AI aim at the constant and immediate assessment of people’s mental condition. They may include wearables, smartphone applications, and other technologies that capture data on one or more indicators of mental health, including sleep, activity, and mood. AI algorithms then use these data to detect changes that may indicate the development or deterioration of a mental health condition (Rogan et al. 2024). For example, apps such as Mood Path and Mind Strong gather information from the user’s smartphone and then give tailored advice on their state of mental health. These monitoring systems are particularly helpful in the long-term treatment of DSM-5 mental disorders because they allow the identification of the beginning of a relapse and the application of preventive measures.
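A minimal sketch of such passive monitoring is shown below: it flags when recently logged sleep and mood fall well below an earlier baseline. The data, window size, and threshold are hypothetical assumptions, not parameters of any named product.

import statistics

sleep_hours = [7.5, 7.2, 7.8, 7.0, 6.9, 5.1, 4.8, 4.5]   # last 8 days (toy data)
mood_scores = [4, 4, 3, 4, 3, 2, 2, 1]                    # self-rated, 1 (low) to 5 (high)

def deviates(series, recent_days=3, z_threshold=1.5):
    """Flag when the recent mean drops well below the earlier baseline."""
    baseline, recent = series[:-recent_days], series[-recent_days:]
    mean, sd = statistics.mean(baseline), statistics.stdev(baseline)
    return (mean - statistics.mean(recent)) > z_threshold * sd

if deviates(sleep_hours) or deviates(mood_scores):
    print("Possible early-warning sign: prompt a check-in or clinician review")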

3.3. Key Applications

In mental health, AI is used in diagnosis, treatment decision making, and even the delivery of therapy itself. The following paragraphs describe some of these areas of application and profile key success stories of AI adoption.
AI in Diagnostics: Perhaps one of the major accomplishments of AI in psychiatry and mental health is in the area of diagnosis. Conventional diagnosis usually relies on questionnaires and clinical interviews, which are often influenced by patients’ self-reported experiences. AI, on the other hand, can process large sets of data collected from clinical practice, unaffected by the fatigue or emotional state of an individual clinician, and can detect patterns that may be difficult even for a clinician to notice (Shah 2022). For instance, there are AI models that predict an individual’s propensity to develop a mental disorder from their digital traces, including status updates, web search histories, and interaction patterns. These AI-based diagnostic tools are less time consuming and can help physicians both to diagnose a condition and to identify a person who may be at risk even before clear symptoms appear.
AI is also becoming popular in planning the treatment of mental health disorders. By processing large amounts of clinical data, AI algorithms can suggest appropriate treatments for the patient, considering genetics, accompanying diseases, and previous responses to therapy (Thieme et al. 2020). Such an individualized approach is especially useful in managing complicated cases where conventional methods of treatment may not work. For instance, platforms are already being developed to select appropriate medication doses for patients with treatment-refractory depression, a process that usually requires a great deal of trial and error in psychiatric medicine.
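To illustrate the idea of data-driven treatment selection in a deliberately simplified form, the sketch below scores candidate options against a few patient features (prior non-response, a comorbidity, a pharmacogenomic flag). The features, option names, and scoring rules are toy assumptions for illustration only, not clinical guidance or any vendor’s algorithm; production systems would learn such weights from clinical data and be validated prospectively.

patient = {  # hypothetical patient profile
    "prior_nonresponse": {"SSRI"},          # medication classes tried without benefit
    "comorbidities": {"insomnia"},
    "cyp2d6_poor_metabolizer": True,
}

candidates = {  # invented option properties
    "SSRI":        {"avoid_if_nonresponse": "SSRI", "sedating": False, "cyp2d6_sensitive": False},
    "SNRI":        {"avoid_if_nonresponse": "SNRI", "sedating": False, "cyp2d6_sensitive": True},
    "mirtazapine": {"avoid_if_nonresponse": None,   "sedating": True,  "cyp2d6_sensitive": False},
}

def score(props):
    s = 0
    if props["avoid_if_nonresponse"] in patient["prior_nonresponse"]:
        s -= 2                      # penalize classes that already failed
    if props["sedating"] and "insomnia" in patient["comorbidities"]:
        s += 1                      # sedation may help comorbid insomnia (toy rule)
    if props["cyp2d6_sensitive"] and patient["cyp2d6_poor_metabolizer"]:
        s -= 1                      # flag a pharmacogenomic interaction (toy rule)
    return s

ranking = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
print(ranking)  # e.g. ['mirtazapine', 'SNRI', 'SSRI'] under these toy rules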
Automated therapy using chatbots and AI therapists is becoming a common method of delivering therapy. Such AI interventions have a number of benefits, including accessibility, cost effectiveness, and the ability to respond to crises as and when they happen. One example is the application of AI to cognitive behavioral therapy (CBT), an approach well known and recognized to be helpful for numerous mental health disorders (Tornero-Costa et al. 2023). Wombat and Wyse are examples of chatbots that provide CBT-based interventions, using natural language processing to hold structured conversations with users for anxiety, depression, and stress management. These AI applications are meant to supplement traditional therapy with products that are always available and that remind the user between sessions of the coping skills taught in therapy. Illustrative cases include AI-assisted mental health care at Vanderbilt University (Case I), Big White Wall (Case II), and Wombat (Case III).
An example of such an implementation is the company Mind Strong, which has created software that tracks behavioral changes from smartphone use for the continuous assessment of mental health (Toro-Tobon et al. 2023). Mind Strong’s AI algorithm draws on a subject’s typing speed, screen interaction, and communication habits to identify whether the subject’s mental health may be deteriorating so that intervention can be initiated. This approach has proved helpful in managing disorders such as depression and bipolar disorder, where early signs of mood switching are important.
Another success story is the use of AI virtual therapists for adult and pediatric populations in underserved areas. For example, Wombat, an AI-based chatbot, has been employed to provide mental health support to patients in rural regions who may not have access to conventional counseling services (Shimada 2023). Therapists have reported that Wombat’s capacity for therapeutic communication delivered evidence-based interventions to these audiences, and its observed reduction of anxiety and depressive symptoms showed how AI can act as a supportive network in the treatment of patients.

3.4. Impact of AI on Mental Health Outcomes

3.4.1. Positive Outcomes

The use of AI in mental health care has brought favorable changes by transforming the techniques and approaches used to provide mental health services. According to Vatansever et al. (2020), one of the most significant benefits is enhanced accessibility to mental health treatment. AI applications such as chatbots and virtual therapists are accessible at any time and do not require substantial payment or long-distance travel to obtain assistance. One example is the popularity of Wombat and Wyse, AI-based programs that deliver cognitive behavioral therapy (CBT) and other kinds of treatment via smartphone. Because they have been found to reduce symptoms of anxiety and depression in people who are unable to visit a therapist, these tools are valuable to have at hand.
Additionally, AI’s success in health informatics has led to algorithms that improve both the accuracy and the speed of mental health diagnosis. According to Oladimeji et al. (2023), when AI is employed in diagnosing mental illness, computers can recognize differences in behavioral patterns earlier than is possible with the conventional methods typically used. For instance, machine learning algorithms can analyze linguistic metrics and the posting frequency of Twitter users in order to differentiate between depressive episodes and periods of remission.
Beyond diagnostics, AI has further enhanced the personalization of mental health therapies, because AI systems can examine the patient’s genetic and behavioral patterns alongside accumulated data on how the patient has responded to particular therapies. This is particularly valuable when straightforward forms of therapy are ineffective and the situation calls for a more complex approach. Careful medication planning supported by AI-based methods has been deployed successfully to improve outcomes for patients with treatment-resistant depression.
Another area that has shown very favorable outcomes is the use of AI in the monitoring of mental health. Mobile applications and wearable technology, in combination with AI, are used to monitor mental health indicators such as sleep, physical activity, and mood. With these instruments, early warning signs of deterioration in mental health can be identified, allowing early interventions to be implemented. For instance, the AI platform Mind Strong can identify changes in cognition and mental condition from data collected on smartphones and provide immediate support to the user. Such continuous monitoring has been estimated to significantly reduce the incidence of relapse in chronic psychiatric diseases such as bipolar disorder.

3.4.2. Challenges and Limitations

Nevertheless, the benefits mentioned above should not be taken to imply that there are no problems or disadvantages associated with the use of AI in mental health care services. In the first place, an important difficulty is the ethical dilemma that arises from the use of AI in the field, which is already a major worry in terms of privacy (Verma et al. 2023). AI applications in mental health raise significant privacy issues because such technologies require the acquisition of vast amounts of personal data. Data leakage and unauthorized access to personal data are of crucial importance, particularly against the backdrop of the stigma associated with mental illness.
Another serious concern with significant repercussions for mental health is the issue of bias and partiality in AI. AI models risk being trained on datasets that already contain bias, leading to biased judgments in the diagnostic and treatment guidance they provide. For instance, a model trained on data from one population may not perform as expected when used with a more varied group of individuals. Because mental health datasets are rarely complete, this may perpetuate inequality in mental health care, particularly for members of minority groups. In related research, Obermeyer et al. (2019) reported that AI methods often lead to unequal treatment along ethnic lines, which indicates the need to address this problem adequately when applying AI solutions to patients with mental health problems.
Two of the most significant limitations of AI stand out. The first is that integrating such systems into health care can reduce the quality of the interaction between the patient and the practitioner. However readily available these AI tools are, they may be limited in their capacity to express emotion and provide care, both of which are essential qualities of a therapist (Bernert et al. 2020). Although AI can assist in providing personalized diagnoses and suggestions, patients may be dehumanized and their emotions and experiences left unaddressed, especially when the symptoms of the mental illness are complex and the therapy depends on interaction between the patient and the clinician.
The second limitation is that the usefulness of AI in the treatment of mental health conditions is still being investigated and validated. A number of researchers have reported that many programs yield favorable outcomes; nonetheless, large-scale clinical studies are needed to fully understand the effects of AI-based therapies. Moreover, the lack of established assessment standards for AI tools in mental health makes it difficult to compare the effectiveness of different interventions. There is a risk that poorly governed AI might be introduced into mental health practice before it is genuinely ready, with a number of unfavorable effects.
Table 1 summarizes mental health care AI applications. It divides AI applications into diagnostics, treatment planning, virtual therapy, monitoring, and predictive analytics. A quick description of each area shows how AI improves mental wellness. AI algorithms that detect depression from social media activity, AI-driven personalized treatment programs, and AI chatbots that provide CBT are examples. Mental health benefits include better early diagnosis, treatment personalization, accessibility, monitoring, and proactive crisis intervention. This table shows how AI can change mental health treatment (Table 1).

3.4.3. Comparative Analysis

The success of AI therapies in comparison with more traditional methods of mental health treatment should be examined in terms of the strengths and drawbacks of AI as a tool in this field. AI has the potential to complement traditional methods of mental health therapy in the following ways (Javed et al. 2023): AI therapy is readily accessible, adaptable, and can be tailored to the specific requirements of the individual patient. For instance, with AI tools, individuals who would not seek treatment due to stigma or geographic constraints are able to obtain assistance, because the support provided by AI is not restricted by the number of therapists available. AI may also assist physicians by giving them statistical forecasts to guide their actions, which can result in a higher proportion of accurate diagnoses and effective treatments.
AI-based approaches, on the other hand, should be seen as a supplement to the many mental health treatments already in place rather than as a replacement for all traditional processes. In most cases, AI-based therapies consist of providing participants with knowledge, but they lack the human touch and interpersonal interaction present in face-to-face sessions with experienced therapists. According to Blease et al. (2020), human therapists are better equipped to deal with some elements of mental health, such as psychological crises or questions about the causes of mental health difficulties. The therapeutic relationship between therapist and patient is a significant factor in the outcome, and this is one area in which AI is not yet capable of functioning to its full potential.
Furthermore, while AI can be a helpful tool in addressing symptoms and in the early detection of worsening mental health, it is not always capable of helping to uncover the factors that lead to mental illness. AI techniques and solutions do not address psychological issues that call for therapeutic intervention such as psychodynamic or humanistic therapy. Thus, despite its significant advantages in accessibility and scalability, AI needs to be used as one component of a more complete approach to the treatment of mental illness that also draws on conventional therapeutic procedures.
  • Strengths and Weaknesses of Existing Research
Research has shown the potential benefits of integrating AI into mental health services. For example, Philippe et al. (2021) undertook a systematic meta-synthesis of digital interventions related to mental health, including AI-based technology, and found enhancements in the availability and effectiveness of mental health care. However, the study also highlighted that some of these interventions were less effective than others because of limitations such as partially engaged users or co-occurring mental disorders. Similarly, Shimada (2023) has underlined that there is still enormous potential for using AI in mental health care but has also noted that most research is based on short-term interventions, while the long-term effectiveness and scalability of each approach remain understudied. Analyzing the literature, it is possible to identify further gaps. For instance, Tornero-Costa et al. (2023) found that most research on applications of AI in the mental health domain had methodological shortcomings and quality concerns, including the absence of detailed protocols and variations in data quality from one study to another. Furthermore, Rogan et al. (2024) noted that studies focusing on AI and machine learning in mental health care are scarce, although the viewpoints of health care professionals are valuable for the operationalization of such technologies.
  • Addressing Research Gaps
This paper seeks to contribute to filling these gaps by providing a critical review of the literature on AI and mental health care, with emphasis on the associated ethical issues and the service and policy implications. Unlike previous studies, which often fail to consider the long-term impacts of interventions and the ethical issues associated with them, this research examines the long-term impacts of AI interventions and ways of addressing the ethical issues.
  • Criteria for Including or Excluding Studies
This literature review encompasses only studies chosen according to defined selection criteria. To ensure that the most up-to-date and reliable information is used, only articles from peer-reviewed journals published in the last five years on topics related to the use of AI in mental health services are discussed. Articles with poor methodological quality or articles that failed to address the ethical aspects of the intervention were excluded. This approach ensures that the review is informed by the most up-to-date and sound evidence.
  • Expanding the Discussion on Algorithmic Bias and Data Privacy
Algorithmic bias and data privacy are two topics commonly raised in the literature, but the depth of discussion is often inadequate. For example, Shah (2022) mentions in passing the possibility that AI systems reproduce existing bias when diagnosing and treating mental ailments but does not analyze in detail how these biases might be reduced. This review builds on that discussion by examining areas of best practice for future work, including bias mitigation strategies such as bias detection functions and the use of diverse, representative datasets (Rajkishan et al. 2023), as illustrated in the sketch below. Data privacy is likewise discussed in more detail, with attention to building AI models that are legally and ethically compliant in terms of privacy protection (Vatansever et al. 2020).
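One concrete form a "bias detection function" can take is an audit of error rates across demographic groups, sketched below. The group labels, predictions, and outcomes are hypothetical placeholders; a real audit would use the model’s deployment population and a validated outcome measure.

from collections import defaultdict

records = [  # (group, true_label, predicted_label) — toy audit data
    ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]

hits, totals = defaultdict(int), defaultdict(int)
for group, y_true, y_pred in records:
    if y_true == 1:                     # look only at people who truly need care
        totals[group] += 1
        hits[group] += int(y_pred == 1)

recall = {g: hits[g] / totals[g] for g in totals}
gap = max(recall.values()) - min(recall.values())
print(recall, f"true-positive-rate gap = {gap:.2f}")  # large gaps warrant review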
  • Ethical Challenges and Practical Solutions
It is important to understand that the ethical issues surrounding AI in mental health care are neither few nor shallow. Thieme et al. (2020) explore the ethical concern that the human aspect of care can be diminished when AI is used in mental health practices. However, their analysis does not provide concrete, practical ways of handling these challenges in the field. This paper therefore also offers feasible recommendations, such as closer collaboration between AI developers and mental health specialists to ensure that AI systems work hand in hand with human judgment (Thieme et al. 2020).
Table 2 critically analyzes AI’s health care applications and highlights major study results. Li et al. (2022) showed that AI-powered diagnostic tools improve the speed and accuracy of dementia diagnosis. Gunasekeran et al. (2021) stress the importance of AI-enhanced telehealth systems in public health. Jaliaawala and Khan (2019) demonstrate AI’s ability to personalize autism treatment. Mentis et al. (2023) illustrate AI’s efficacy in stress detection using wearables, whereas Khalid et al. (2024) highlight AI’s promise in personalized rehabilitation, improving recovery. The table shows how AI has transformed health care (Table 2).
This study helps the reader understand field applications of AI in different domains, such as health care services, and it aims for scientific rigor by attempting a comprehensive analysis of the use of AI in mental health services. Nevertheless, the review does not draw on quantitative data, and their inclusion can enrich its arguments. The statistical analyses need not be complicated: even simple statistical indicators can strengthen the paper. To fill this gap, basic frequency analyses are included as an extension of the review. They give a clearer picture of how AI is employed across the various specialties of mental health care, provide stronger backing for the arguments made in this paper, and offer a more exhaustive understanding of AI’s role in this sector.
Frequency analysis of AI applications in mental health care is presented in Table 3. The table lists the primary fields of AI use, classified by the number of reviewed studies. Predictive analytics is the most investigated area, accounting for 30% of the analyzed reports. Diagnostic tools and therapeutic interventions come next, at 22% and 18%, respectively. This distribution shows where AI is most applied and reveals the relative emphasis placed on each field.
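As an illustration of how the Table 3 percentages can be derived, the sketch below computes each application area’s share of the reviewed studies. The raw counts are back-calculated placeholders chosen to be consistent with the percentages quoted above (out of the 120 included publications), not the authors’ original tallies.

counts = {  # hypothetical counts per application area
    "Predictive analytics": 36,
    "Diagnostic tools": 26,
    "Therapeutic interventions": 22,
    "Monitoring": 20,
    "Other": 16,
}
total = sum(counts.values())  # 120 reviewed publications
for area, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{area}: {n}/{total} = {100 * n / total:.0f}%")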

4. Discussion

4.1. Interpretation of Results

The findings of this review indicate that AI has the potential to significantly transform the management and delivery of mental health care. According to Aggarwal et al. (2023), AI applications such as more accurate diagnostics, personalized treatment plans, and expanded access to therapy can address many of the current shortcomings in mental health service provision. However, the results also highlight a complex and sometimes problematic landscape in which AI is being adopted and implemented. While AI offers notable advantages, including improved efficiency and accessibility, it also poses considerable risks, such as ethical challenges, algorithmic biases, and concerns regarding data privacy.
One of the most significant management implications is the democratization of mental health care through AI. The deployment of virtual therapists and AI-driven monitoring systems provides an opportunity to support patients who are isolated, geographically remote, or lacking in social support. This aligns with the findings of Shimada (2023) and Alhuwaydi (2024), who emphasize AI’s role in addressing disparities in access to mental health services. However, despite AI’s potential to broaden access, there remains a question about whether these technologies can deliver outcomes comparable to traditional, face-to-face therapy. The therapeutic alliance between a therapist and a patient, which many believe is critical to successful treatment, may be difficult to replicate with AI systems.
Additionally, the use of AI in diagnostics emerges as a key area of focus. AI-based systems have demonstrated increased operational efficiency, particularly in identifying mental health conditions such as anxiety and depression. This finding is supported by Oladimeji et al. (2023), who highlight AI’s role in early detection and prevention of mental health issues. However, the challenge of “translating” validated diagnostic tests across diverse populations persists. AI algorithms in mental health are susceptible to discrepancies that may arise from underlying biases in the data, which can perpetuate social inequalities if not properly managed and tested. This issue underscores one of the central debates surrounding AI: the potential for algorithmic bias and the need for robust oversight and regulatory frameworks to mitigate these risks.
The implications for management and policy are profound. AI offers the possibility of enhancing the effectiveness of mental health care systems by improving data analysis and providing precise treatment recommendations. This potential aligns with the conceptual literature that supports the integration of AI in clinical decision making, as noted by Xu et al. (2021). However, the review also underscores a significant limitation of AI and machine learning: the inability to fully capture the complexity and subtleties of human emotions and behaviors, which are crucial for accurate diagnosis and treatment planning. This limitation necessitates a reevaluation of the extent to which AI can be relied upon in mental health care. It suggests that AI should be viewed as an adjunct to, rather than a replacement for, human judgment in clinical settings.

4.2. Broader Context and Implications

The relevance of AI in mental health extends well beyond the challenges of practical application; it has broader ramifications for the profession and the values it upholds. As a technology, AI has the potential to change both the delivery of mental health care and the way people see it as a component of human services (Li et al. 2022). The use of AI in mental health treatment may also shift the traditional patient–clinician dynamic, with AI systems undertaking tasks previously performed only by human clinicians. This may result in care that is more effective and more easily accessible; nonetheless, it raises the question of whether care will become impersonal. AI’s power to redraw the boundaries of mental health service provision should not be allowed to eliminate the human component, which is often crucial in the process of healing.
Another significant repercussion is the possibility of reducing the stigma associated with seeking mental health care. AI may encourage more individuals to obtain the care they need without fear of discrimination when discussing their condition. This could have a considerable impact on individual health, particularly in cultures that discriminate against people with mental illness (Dakanalis et al. 2024). On the other hand, this must be balanced against the fact that excessive reliance on AI may diminish direct contact with the patient, particularly for patients struggling with mental health concerns; as a result, some of these patients may be overlooked when they need more intensive care.
AI’s role in mental health treatment also intersects with wider social problems, such as equitable access to care. Although AI could in principle extend access to mental health treatment to the whole population, the technologies at issue can also widen existing inequality if the problem is not handled correctly (Denecke et al. 2021). For instance, in a society in which people are discriminated against on the basis of race, color, gender, or origin, it is quite probable that an AI program will reproduce that discrimination. Additionally, implementing AI in mental health requires substantial investment in infrastructure and technology, which may not be accessible to all populations. This demonstrates the need to deploy AI mental health solutions in a manner that is equitable to all.
The ethical difficulties that arise from the use of AI in mental health treatment are a source of great concern. The assessment draws attention to a number of ethical problems, the most significant being consent and data privacy, as well as the potential for AI to influence people in harmful ways. The use of personal data available to AI systems is a highly debated problem because it is not always known who may access such data and how they will be used (Asan and Choudhury 2021). Only if the standards of explicability, accountability, and conformity with privacy rules for the processing of personal data are met will it be possible to handle these difficulties. Moreover, accurate AI-generated predictions could be used for predatory and manipulative purposes, such as predictive policing or coerced psychosocial treatment, and such forecasts can inform decisions that negatively affect individuals. For this reason, it is of the utmost importance to create a trustworthy standard for the ethical use of AI in mental health that safeguards patients’ autonomy and best interests.
Looking to the future, the development and implementation of AI in mental health systems will raise new topics and difficulties. The most significant consequence of the rapid development of AI technology is the need for the regulatory and ethical structures governing AI in mental health care settings to remain dynamic. Integrating stakeholders will require the participation of policymakers, clinicians, and researchers to guarantee that the incorporation of AI contributes to the improvement of mental health services rather than their decline. This will also include analyzing the present problems associated with the use of AI in mental health and the potential risks that may arise in the future as the technology becomes embedded in the field.

4.3. Future Research Directions

This review has pointed out several gaps that future research needs to fill to maximize the potential of AI in mental health care. Perhaps the most significant is the lack of sufficiently stringent validation of AI-based mental health applications across different kinds of populations (Balcombe and De Leo 2021). Modern AI systems rely on training data that may not be appropriate for all populations, which can introduce bias into the operation of the respective tools. Further research should acquire datasets that represent a wider population, since current datasets tend to reflect particular regions or countries.
Another area concerns the long-term effects of AI on mental wellbeing. Although some studies support the idea that adopting AI has a positive impact on mental health, many of them are short term, and the long-term impact of AI on patients’ health has not been thoroughly researched. Longitudinal studies in this field would offer meaningful information on the efficacy of AI in the care and treatment of mental health patients.
Furthermore, more research should examine the ethical concerns that arise from the use of AI in mental health. Although the review outlines a number of ethical issues, questions about the interaction of AI with privacy, autonomy, and consent remain to be explored in relation to mental health (Graham et al. 2020). Subsequent research should address the ethics of AI in an interdisciplinary way, drawing on ethics, law, the social sciences, and other disciplines; such work is crucial for creating the ethics policies and procedures essential to the application of AI in mental health.
One area of future research that looks particularly promising is human–AI collaborative models. As the literature analysis shows, current applications of AI in mental health care still depend on human decision making. Models that combine the AI approach with traditional care could give patients the best of both worlds: the accuracy and efficiency of AI solutions together with reliance on a human clinician for the human factor, which is often decisive in mental health therapy.
Further research is also needed on AI’s role in mental health promotion and prevention. At present, the emphasis is on diagnostic and therapeutic uses, yet major applications are also envisioned for prevention and mental health enhancement. For instance, AI could help predict when mental health problems are most likely to arise and recommend early interventions before conditions worsen. AI may also be employed to design and implement public health campaigns that promote mental health and reduce prejudice toward such problems. Developing these opportunities might expand the ways in which mental health outcomes can be improved at the population level.

5. Conclusions

The review of the literature on the implementation of AI in mental health shows both considerable research progress and persistent hurdles. AI’s relative strengths in advancing mental health treatment lie in features such as predictive analytics for therapy planning, chatbot-delivered CBT, and real-time mental health monitoring. These technologies have been invaluable in expanding the availability of mental health assistance, providing individualized treatment, and enabling the timely detection of mental disorders. However, the review also shows the shortcomings of such programs, including ethical issues such as violations of data privacy and the use of biased algorithms, as well as the risk of depersonalizing mental health services through the use of AI. These advantages and disadvantages keep the discussion of AI in mental health care open and, no doubt, highly sensitive, as the field pursues innovation without compromising ethical standards.

5.1. Implications

AI’s advance into mental health is rapid and wide-ranging, with significant implications for management, policymaking, and practice. One of its benefits is as an accelerator of access to mental health treatment, providing precise and fast services. For instance, AI can deliver mental health care to patients in regions with few facilities offering such services, or it can process large datasets and identify patterns that would be difficult to detect through clinical judgment alone. These capabilities suggest that AI can assume a strategic place in improving the functionality and efficacy of mental health interventions.
However, the ethical and social implications of using AI in mental health care cannot be ignored. Applying AI introduces new risks to decision making, such as the reinforcement of existing societal prejudices or a failure to acknowledge patients’ cultural diversity. The protection of personal data is also essential, particularly data on patients’ emotional and mental states, because most AI solutions rely on collecting and analyzing large datasets. A further drawback, and one that can reduce the impact of the intervention, is that AI may erode clinicians’ autonomy and distance the physician–patient relationship. These issues make it crucial to design and select AI applications that enhance the quality of mental health care while avoiding such negative effects.

5.2. Final Thoughts

Altogether, this paper argues that AI holds great promise for the provision of mental health services. Its potential to enhance diagnostic precision, personalize treatment, and increase the availability of services is enormous. Incorporating AI into mental health care is necessary, but care must be taken over exactly how this happens, with close attention to the many ethical and practical problems connected with the technology. Transparency, fairness, and understanding and responding to patients’ needs must be the guiding principles in the design and deployment of AI.
The review underlines the need for ongoing analysis of the possibilities and risks of AI in developing mental health care. Ethical policies must be strengthened, working relationships across professions need to be built, and the regulatory framework must be both adaptive and robust for this process to advance adequately. As AI in health care continues to grow, particular attention should be paid to finding the right balance between opportunities and threats so that these technologies are used in mental health care to the benefit and safety of patients.

Author Contributions

Conceptualization, S.H.; methodology, S.H.; formal analysis, S.H.; investigation, S.H.; resources, S.H.; data curation, S.H.; writing—original draft preparation, S.H.; writing—review and editing, K.K. and S.H.; visualization, S.H.; supervision, K.K.; project administration, K.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aggarwal, Abhishek, Cheuk Chi Tam, Dezhi Wu, Xiaoming Li, and Shan Qiao. 2023. Artificial Intelligence–Based Chatbots for Promoting Health Behavioral Changes: Systematic Review. Journal of Medical Internet Research 25: e40789. [Google Scholar] [CrossRef] [PubMed]
  2. Alhuwaydi, Ahmed. 2024. Exploring the Role of Artificial Intelligence in Mental Healthcare: Current Trends and Future Directions—A Narrative Review for a Comprehensive Insight. Risk Management and Healthcare Policy 17: 1339–48. [Google Scholar] [CrossRef] [PubMed]
  3. Asan, Onur, and Avishek Choudhury. 2021. Artificial Intelligence Human Factors Healthcare: A Mapping Review (Preprint). JMIR Human Factors 8: e28236. [Google Scholar] [CrossRef] [PubMed]
  4. Balcombe, Luke, and Diego De Leo. 2021. Digital Mental Health Challenges and the Horizon Ahead for Solutions. JMIR Mental Health 8: e26811. [Google Scholar] [CrossRef] [PubMed]
  5. Baños, Rosa M., Rocío Herrero, and M. Dolores Vara. 2022. What is the Current and Future Status of Digital Mental Health Interventions? The Spanish Journal of Psychology 25: e5. [Google Scholar] [CrossRef] [PubMed]
  6. Barua, Prabal Datta, Jahmunah Vicnesh, Raj Gururajan, Shu Lih Oh, Elizabeth Palmer, Muhammad Mokhzaini Azizan, Nahrizul Adib Kadri, and U. Rajendra Acharya. 2022. Artificial Intelligence Enabled Personalised Assistive Tools to Enhance Education of Children with Neurodevelopmental Disorders—A Review. International Journal of Environmental Research and Public Health 19: 1192. [Google Scholar] [CrossRef] [PubMed]
  7. Bernert, Rebecca A., Amanda M. Hilberg, Ruth Melia, Jane Paik Kim, Nigam H. Shah, and Freddy Abnousi. 2020. Artificial Intelligence and Suicide Prevention: A Systematic Review of Machine Learning Investigations. International Journal of Environmental Research and Public Health 17: 5929. [Google Scholar] [CrossRef] [PubMed]
  8. Blease, Charlotte, Cosima Locher, Marisa Leon-Carlyle, and P. Murali Doraiswamy. 2020. Artificial intelligence and the future of psychiatry: Qualitative findings from a global physician survey. Digital Health 6: 205520762096835. [Google Scholar] [CrossRef]
  9. Dakanalis, Antonios, Brenda K. Wiederhold, and Giuseppe Riva. 2024. Artificial Intelligence: A Game-Changer for Mental Health Care. Cyberpsychology, Behavior, and Social Networking 27: 100–4. [Google Scholar] [CrossRef]
  10. Denecke, Kerstin, Alaa Abd-Alrazaq, and Mowafa Househ. 2021. Artificial Intelligence for Chatbots in Mental Health: Opportunities and Challenges. In Multiple Perspectives on Artificial Intelligence in Healthcare. Cham: Springer, pp. 115–28. [Google Scholar] [CrossRef]
  11. Filip, Stanislav, Nadiya Dubrovina, and Mykola Sidak. 2023. Organization and Financing of Healthcare in the Slovak Republic and Selected European Countries. In Developments in Information and Knowledge Management Systems for Business Applications. Studies in Systems, Decision and Control. Edited by Natalia Kryvinska, Michal Greguš and Solomiia Fedushko. Cham: Springer, vol. 466. [Google Scholar] [CrossRef]
  12. Funta, Rastislav, and Marián Horváth. 2024. Can the Platform Operator, Who Acts as a Provider on His Own Platform, Favor Himself over Third-Party Providers? Juridical Tribune Review of Comparative and International Law 14: 227–42. [Google Scholar] [CrossRef]
  13. Graham, Sarah A., Ellen E. Lee, Dilip V. Jeste, Ryan Van Patten, Elizabeth W. Twamley, Camille Nebeker, Yasunori Yamada, Ho-Cheol Kim, and Colin A. Depp. 2020. Artificial intelligence approaches to predicting and detecting cognitive decline in older adults: A conceptual review. Psychiatry Research 284: 112732. [Google Scholar] [CrossRef] [PubMed]
  14. Gunasekeran, Dinesh V., Rachel Marjorie Wei Wen Tseng, Yih-Chung Tham, and Tien Yin Wong. 2021. Applications of digital health for public health responses to COVID-19: A systematic scoping review of artificial intelligence, telehealth and related technologies. NPJ Digital Medicine 4: 40. [Google Scholar] [CrossRef] [PubMed]
  15. Husnain, Ali, Hafiz Khawar Hussain, Hafiz Muhammad Shahroz, Muhammad Ali, and Yawar Hayat. 2024. Advancements in Health through Artificial Intelligence and Machine Learning: A Focus on Brain Health. Revista Española de Documentación Científica 18: 100–23. [Google Scholar]
  16. Jaliaawala, Muhammad Shoaib, and Rizwan Ahmed Khan. 2019. Can autism be catered with artificial intelligence-assisted intervention technology? A comprehensive survey. Artificial Intelligence Review 53: 1039–69. [Google Scholar] [CrossRef]
  17. Javed, Abdul Rehman, Ayesha Saadia, Huma Mughal, Thippa Reddy Gadekallu, Muhammad Rizwan, Praveen Kumar Reddy Maddikunta, Mufti Mahmud, Madhusanka Liyanage, and Amir Hussain. 2023. Artificial Intelligence for Cognitive Health Assessment: State-of-the-Art, Open Challenges and Future Directions. Cognitive Computation 15: 1767–812. [Google Scholar] [CrossRef]
  18. Kaššaj, Michal, and Tomáš Peráček. 2024. Synergies and Potential of Industry 4.0 and Automated Vehicles in Smart City Infrastructure. Applied Sciences 14: 3575. [Google Scholar] [CrossRef]
  19. Kaur, Manpreet, Simarjeet Kaur, Puneet Malhotra, and Chandra Shekhar Mukhopadhyay. 2024. Genetic diversity analysis by using Heterologous Microsatellite markers among cattle and buffalo breeds. Letters in Animal Biology 4: 23–28. [Google Scholar] [CrossRef]
  20. Khalid, Umamah Bint, Muddasar Naeem, Fabrizio Stasolla, Madiha Haider Syed, Musarat Abbas, and Antonio Coronato. 2024. Impact of AI-Powered Solutions in Rehabilitation Process: Recent Improvements and Future Trends. International Journal of General Medicine 17: 943–69. [Google Scholar] [CrossRef] [PubMed]
  21. Li, Renjie, Xinyi Wang, Katherine Lawler, Saurabh Garg, Quan Bai, and Jane Alty. 2022. Applications of artificial intelligence to aid early detection of dementia: A scoping review on current capabilities and future directions. Journal of Biomedical Informatics 127: 104030. [Google Scholar] [CrossRef] [PubMed]
  22. Mariš, Martin. 2020. Municipal changes in Slovakia. The evidence from spatial data. European Journal of Geography 11: 58–72. [Google Scholar] [CrossRef]
  23. Mentis, Alexios-Fotios A., Donghoon Lee, and Panos Roussos. 2023. Applications of artificial intelligence−machine learning for detection of stress: A critical overview. Molecular Psychiatry 29: 1882–94. [Google Scholar] [CrossRef] [PubMed]
  24. Milne-Ives, Madison, Caroline de Cock, Ernest Lim, Melissa Harper Shehadeh, Nick de Pennington, Guy Mole, Eduardo Normando, and Edward Meinert. 2020. The Effectiveness of Artificial Intelligence Conversational Agents in Health Care: Systematic Review. Journal of Medical Internet Research 22: e20346. [Google Scholar] [CrossRef] [PubMed]
  25. Mirbabaie, Milad, Stefan Stieglitz, and Nicolas R. J. Frick. 2021. Artificial intelligence in disease diagnostics: A critical review and classification on the current state of research guiding future direction. Health and Technology 11: 693–731. [Google Scholar] [CrossRef]
  26. Mishra, Gaurav. 2024. A Comprehensive Review of Smart Healthcare Systems: Architecture, Applications, Challenges, and Future Directions. International Journal of Innovative Research in Technology and Science 12: 210–18. Available online: https://ijirts.org/index.php/ijirts/article/view/32 (accessed on 8 August 2024).
  27. Nazar, Mobeen, Muhammad Mansoor Alam, Eiad Yafi, and Mazliham Su’ud. 2021. A Systematic Review of Human-Computer Interaction and Explainable Artificial Intelligence in Healthcare with Artificial Intelligence Techniques. IEEE Access 9: 153316–48. [Google Scholar] [CrossRef]
  28. Obermeyer, Ziad, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366: 447–53. [Google Scholar] [CrossRef]
  29. Oladimeji, Kelechi Elizabeth, Athini Nyatela, Siphamandla Gumede, Depika Dwarka, and Samantha Tresha Lalla-Edward. 2023. Impact of Artificial Intelligence (AI) on Psychological and Mental Health Promotion: An Opinion Piece. New Voices in Psychology 13: 12. [Google Scholar] [CrossRef]
  30. Olawade, David B., Ojima Z. Wada, Aderonke Odetayo, Aanuoluwapo Clement David-Olawade, Fiyinfoluwa Asaolu, and Judith Eberhardt. 2024. Enhancing Mental Health with Artificial Intelligence: Current Trends and Future Prospects. Journal of Medicine, Surgery, and Public Health 3: 100099. [Google Scholar] [CrossRef]
  31. Omaghomi, Toritsemogba Tosanbami, Oluwafunmi Adijat Elufioye, Opeoluwa Akomolafe, Evangel Chinyere Anyanwu, and Ifeoma Pamela Odilibe. 2024. A comprehensive review of telemedicine technologies: Past, present, and future prospects. International Medical Science Research Journal 4: 183–93. [Google Scholar] [CrossRef]
  32. Peráček, Tomáš, and Michal Kaššaj. 2023. A Critical Analysis of the Rights and Obligations of the Manager of a Limited Liability Company: Managerial Legislative Basis. LAWS 12: 56. [Google Scholar] [CrossRef]
  33. Philippe, Tristan J., Naureen Sikder, Anna Jackson, Maya E. Koblanski, Eric Liow, Andreas Pilarinos, and Krisztina Vasarhelyi. 2021. Digital Health Interventions for Delivery of Mental Health Care: Systematic and Comprehensive Meta-Review (Preprint). JMIR Mental Health 9: e35159. [Google Scholar] [CrossRef] [PubMed]
  34. Rajkishan, S. S., A. Jiran Meitei, and Abha Singh. 2023. Role of AI/ML in the Study of Mental Health Problems of the Students: A Bibliometric Study. International Journal of System Assurance Engineering and Management. [Google Scholar] [CrossRef]
  35. Rogan, Jessica, Sandra Bucci, and Joseph Firth. 2024. Health Care Professionals’ Views on the Use of Passive Sensing, AI, and Machine Learning in Mental Health Care: Systematic Review with Meta-Synthesis. JMIR Mental Health 11: e49577. [Google Scholar] [CrossRef] [PubMed]
  36. Shah, Varun. 2022. AI in Mental Health: Predictive Analytics and Intervention Strategies. Journal Environmental Sciences And Technology 1: 55–74. Available online: https://jest.com.pk/index.php/jest/article/view/72 (accessed on 10 August 2024).
  37. Shimada, Koki. 2023. The Role of Artificial Intelligence in Mental Health: A Review. Science Insights 43: 1119–27. [Google Scholar] [CrossRef]
  38. Srebalová, Mária, František Vojtech, Bernard Pekár, Beáta Mikušová-Mericková, and Matej Horvát. 2018. Restriction on the re-export of medicinal products and the supervision of compliance with it by public administration bodies. European Pharmaceutical Journal 65: 24–30. [Google Scholar] [CrossRef]
  39. Tache, Popa Cristina Elena, and Catalin Silviu Sararu. 2023. New transdisciplinary directions in international law? Lex Humana 15: 86–109. [Google Scholar]
  40. Thieme, Anja, Danielle Belgrave, and Gavin Doherty. 2020. Machine Learning in Mental Health. ACM Transactions on Computer-Human Interaction 27: 1–53. [Google Scholar] [CrossRef]
  41. Tornero-Costa, Roberto, Antonio Martinez-Millana, Natasha Azzopardi-Muscat, Ledia Lazeri, Vicente Traver, and David Novillo-Ortiz. 2023. Methodological and Quality Flaws in the Use of Artificial Intelligence in Mental Health Research: Systematic Review. JMIR Mental Health 10: e42045. [Google Scholar] [CrossRef]
  42. Toro-Tobon, David, Ricardo Loor-Torres, Mayra Duran, Jungwei W. Fan, Naykky Singh Ospina, Yonghui Wu, and Juan P. Brito. 2023. Artificial Intelligence in Thyroidology: A Narrative Review of the Current Applications, Associated Challenges, and Future Directions. Thyroid 33: 903–17. [Google Scholar] [CrossRef] [PubMed]
  43. Vatansever, Sezen, Avner Schlessinger, Daniel Wacker, H. Ümit Kaniskan, Jian Jin, Ming-Ming Zhou, and Bin Zhang. 2020. Artificial intelligence and machine learning-aided drug discovery in central nervous system diseases: State-of-the-arts and future directions. Medicinal Research Reviews 41: 1427–73. [Google Scholar] [CrossRef] [PubMed]
  44. Verma, Shradha, Tripti Goel, Mohammad Sayed, Weiping Ding, Rahul Sharma, and Rajendiran Murugan. 2023. Machine learning techniques for the Schizophrenia diagnosis: A comprehensive review and future research directions. Journal of Ambient Intelligence and Humanized Computing 14: 4795–807. [Google Scholar] [CrossRef]
  45. Xu, Lu, Leslie Sanders, Kay Li, and James C. L. Chow. 2021. Chatbot for Health Care and Oncology Applications Using Artificial Intelligence and Machine Learning (Preprint). JMIR Cancer 7: e27850. [Google Scholar] [CrossRef] [PubMed]
  46. Younis, Hussain A., Taiseer Abdalla Elfadil Eisa, Maged Nasser, Thaeer Mueen Sahib, Ameen A. Noor, Osamah Mohammed Alyasiri, Sani Salisu, Israa M. Hayder, and Hameed AbdulKareem Younis. 2024. A Systematic Review and Meta-Analysis of Artificial Intelligence Tools in Medicine and Healthcare: Applications, Considerations, Limitations, Motivation and Challenges. Diagnostics 14: 109. [Google Scholar] [CrossRef] [PubMed]
  47. Zahlan, Ahmed, Ravi Prakash Ranjan, and David Hayes. 2023. Artificial intelligence innovation in healthcare: Literature review, exploratory analysis, and future research. Technology in Society 74: 102321. [Google Scholar] [CrossRef]
Table 1. AI applications in mental health: enhancing care through technology.

AI Application | Description | Examples of Use | Impact on Mental Health
Diagnostics | AI algorithms analyzing data to identify mental health conditions. | Use of AI to detect depression from social media activity. | Improved accuracy in early diagnosis of conditions like depression.
Treatment Planning | AI systems generating personalized treatment plans based on individual patient data. | AI-driven suggestions for medication or therapy adjustments. | Enhanced personalization leading to more effective treatment outcomes.
Virtual Therapy | AI-powered tools like chatbots providing therapy sessions or support. | AI chatbots delivering cognitive behavioral therapy (CBT) online. | Increased accessibility to mental health support, reducing barriers.
Monitoring | Continuous monitoring of mental health through AI analysis of data from wearables or other devices. | Wearables tracking sleep patterns to predict anxiety levels. | Continuous support and intervention, preventing the escalation of issues.
Predictive Analytics | AI predicting mental health crises or outcomes based on historical and real-time data. | Predicting suicide risk by analyzing patient history and behavior. | Proactive intervention, potentially saving lives.
Table 2. Applications and their key findings.

Application | Source | Key Findings
AI-Powered Diagnostic Tools | Li et al. (2022) | AI aids in the early detection of dementia, improving diagnostic accuracy.
AI-Enhanced Telehealth | Gunasekeran et al. (2021) | AI-integrated telehealth platforms are crucial for public health responses.
AI in Autism Interventions | Jaliaawala and Khan (2019) | AI helps in creating personalized therapy plans for individuals with autism.
AI in Stress Detection | Mentis et al. (2023) | AI effectively detects stress through wearables and predictive models.
AI in Rehabilitation | Khalid et al. (2024) | AI supports personalized rehabilitation plans, leading to better recovery outcomes.
Table 3. Fields of use of AI in mental health care.

Field of Use | Number of Studies | Percentage of Total Studies
Predictive analytics | 25 | 30%
Diagnostic tools | 18 | 22%
Therapeutic interventions | 15 | 18%
Monitoring and surveillance | 12 | 15%
Data privacy and security | 8 | 10%
Other | 7 | 5%
Total | 85 | 100%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
