One of the striking aspects of the current state of global health is the ever-rising need for therapeutic and mental health care. For instance, the amount of sick leave due to mental illness in Germany has more than doubled since 1997 [1
]. While this crisis is global in nature, its effects are more apparent in countries with weaker health care systems. For example, 77% of global mental health-related suicides occur in low- or middle-income countries [2
]. Despite the increasing need for mental health care, the number of health care workers in these countries remains incredibly low. It is estimated that the supply of health care workers in low-income countries is as low as 2 per 100,000 inhabitants, while high-income countries have an average of 70 health care workers per 100,000 inhabitants [3
]. The COVID-19 pandemic further highlighted the urgent need for additional investments in the mental health care sector. In the first year of the pandemic alone, emergency calls related to mental health conditions, suicide attempts, drug overdoses, or child abuse increased significantly in the United States [4
]. The negative effects of long-lasting lockdowns can also be observed globally, with people often suffering from loneliness or depression [2].
Given the increasing need for mental health care workers coupled with a less than adequate supply in the short to medium term, alternative modes of mental health care need to be considered to prevent an all-out mental health crisis. Empathetic Artificial Intelligence (AI) agents or Conversational Agents (CAs) [5
] have emerged in the recent past as a viable alternative because they are accessible anywhere and available at any time to provide counseling and deliver therapeutic interventions [3
]. CAs may help people cope with mental health conditions such as depression, anxiety, and loneliness, thereby enhancing mental health and well-being [6].
While the idea of CAs mitigating the mental health crisis is appealing, it would be unwise to implement it on a large scale without evaluating the impact of such CA-based therapy on individuals and societies. The conversation regarding the benefits and dark sides of AI is not new. While the concept of CAs for mental health has been popularized recently, attempts to create “therapy bots” have been underway for decades [7
]. It began in 1966, when computer scientist Joseph Weizenbaum created an empathetic machine to simulate a psychotherapist. The technology behind the “empathetic” machine was a simple computer program called ELIZA that was able to communicate with humans via natural language [8
]. He witnessed how his participants opened their hearts to a computer program and was shocked by their emotional attachment to the program. This experience turned him into an ardent critic of his own creation [9
]. While the people interacting with the machine ascribed human characteristics to it, and psychiatrists saw ELIZA’s potential for computer-based therapy as a “form of psychological treatment” [10
] (p. 305), Weizenbaum himself had misgivings about this mode of therapy. He wanted to “rob ELIZA of the aura of magic to which its application to psychological subject matter has to some extent contributed” [8
] (p. 43).
Fast forward almost six decades, and the debate still rages on. While there is an increasing number of therapy bots on the market, critics continue to advocate for the need to reassess the potential “dark sides” of AI and the ethical responsibilities of developers and designers [11
]. We now live in a world where ELIZA’s descendants have become an integral part of people’s lives and have names such as Woebot and Replika. They have evolved into “digital creatures that express human-like feelings” [11
] (p. 1) and have become increasingly capable of handling highly complex tasks with human qualities such as a higher autonomy of decision-making [12].
The domain of mental health care is a highly patient-centered sphere, where a successful conversation is dependent on patients’ individual dynamic behavior and the therapist’s ability to adapt to the patient’s specific needs in order to form a therapeutic relationship [14
]. As the capabilities of CAs continue to evolve, they are able to personalize mental health care [14
] by capturing users’ individual dynamic behavior and adapting to their specific personalities. One of the goals of this paper is to examine one such specific type of CA, the Personality-Adaptive Conversational Agent (PACA). The PACA represents a novel, AI-based way to address the serious and prevalent problem of mental health issues and demonstrates progress in the field of healthcare [16
]. PACAs automatically infer users’ personality traits and adapt to their personality by using language that is specific to a particular personality dimension (e.g., extraversion, agreeableness) to enhance dialogue quality [19
]. Thus, PACAs establish rapport with the patient to enhance interaction quality and mental health support.
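To make the adaptation step concrete, trait-specific language selection can be sketched as a lookup from an inferred dominant personality dimension to a matching response style. The trait names, example phrasings, and helper functions below are illustrative assumptions, not the implementation of any particular PACA; in practice the trait scores would come from an automatic personality-inference model:

```python
# Illustrative sketch: choose a response style matching the user's
# dominant personality dimension. The scores are assumed to be produced
# elsewhere by a personality-inference model.

TRAIT_STYLES = {
    "extraversion": "That sounds exciting -- tell me more about it!",
    "agreeableness": "I understand how you feel, and I'm here for you.",
    "neuroticism": "Let's take this one small, manageable step at a time.",
}

def dominant_trait(scores):
    """Return the personality dimension with the highest inferred score."""
    return max(scores, key=scores.get)

def adapt_response(scores):
    """Pick a phrasing matched to the user's dominant trait."""
    return TRAIT_STYLES.get(dominant_trait(scores),
                            "Thank you for sharing that with me.")

scores = {"extraversion": 0.7, "agreeableness": 0.5, "neuroticism": 0.2}
print(adapt_response(scores))  # -> That sounds exciting -- tell me more about it!
```

A deployed agent would, of course, condition on the full dialogue context rather than a single lookup; the sketch only shows where trait adaptation enters the response-selection step.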
While we celebrate the progress of AI-assisted mental health care agents, it is also important to highlight the caveats of creating and perfecting human-like CAs with simulated feelings without considering the long-term consequences for human beings, such as the deep emotional attachments demonstrated in the case of ELIZA [8
]. Accordingly, determining the degree of likeness to humans [11
] poses a challenge for CA designers. Designing CAs that are capable of expressing human-like characteristics such as a personality and yet withholding from them the expressions of feelings and empathy because they “are the very substance of our humanness” [11
] (p. 1) is a matter of delicate balance. This dilemma is brought into sharper focus by the increasingly acute global shortage of the mental health workforce, which makes empathetic CAs a promising source of support [2].
In light of the caveats that need to be considered with human-like CAs that do not have “real” feelings, it is critical to take ethical issues (i.e., trust, privacy, support) into consideration when designing PACAs. In addition, it is necessary to identify the apprehensions people have about the use of PACAs and how the design of PACAs can help overcome these caveats. Consequently, in this paper we focus on answering the following research questions:
RQ1: What are the benefits of PACAs in mental health care?
RQ2: What caveats do we need to be aware of regarding the usage of PACAs in mental health care?
RQ3: Which requirements can be derived from the identified caveats, and what solutions can address them?
To address these research questions, we followed an explorative research approach and conducted a qualitative study [21
]. The results of this study contribute to the understanding of PACAs’ overlooked benefits and emerging caveats in mental health care, particularly the potential positive and negative aspects of PACAs. Furthermore, we specifically focus on the caveats and recommend solutions to address them.
The remainder of this paper is structured as follows: In the theoretical background, we first give a brief overview of selected CAs used in a mental health care context and elaborate on the concept and functionalities of a PACA. We then explain our method and how we simulated a conversation between a PACA therapist and a human patient to conduct our qualitative study. Finally, we present the results and discuss our research questions.
4. Results: Benefits and Caveats of PACAs in Mental Health Care
Overall, we coded three categories with seven subcategories and assigned 410 text segments to the code system. The first category, PACA Support, is divided into three subcategories: Merits covers the advantages of PACA support in mental health care, Demerits illustrates the concerns participants had about a PACA in this specific context, and Limited Merits includes the statements of respondents who found the support of a PACA only partially helpful. The second category, PACA Trust, includes statements about the extent to which participants would trust a PACA and whether they would build a relationship with the CA over a longer period. The two codes derived for this category were Trustworthy and Not Trustworthy. The third and final category, PACA Privacy, was specifically about data privacy and whether participants would allow access to their data in order for the CA to be personality-adaptive. Its two subcategories are Uncritical and Critical. Figure 1 provides an overview of the categories and their subcategories.
4.1. PACA Support
PACA Support contains the most responses, as this category consists of three specific questions altogether (see Table 1
). Support or social support includes mechanisms and activities involving interpersonal relationships to protect and help people in their daily lives [6
]. One of the demerits most frequently mentioned by the participants is that the PACA has limited skills and that this lack of ability can lead to wrong judgments. For example, one user stated: “Potential misinterpretations from the PACA of what I said could lead to more negative thoughts and make things worse”. Similarly, two other participants mentioned: “I would be afraid that not all of the details will be understood correctly, and maybe ‘wrong’ judgement will come up” and “[…] a PACA is not human and cannot fully understand the full range of issues a person is dealing with”. The non-humanness of the PACA is an issue that was brought up by the participants on multiple occasions. They felt that a human therapist was necessary for mental health care and could not imagine interacting with a PACA. Many participants did not have a specific reason not to choose the PACA, or they believed that people are simply better at helping than PACAs. They expressed sentiments such as “No, I most likely will always choose a real human therapist” or “People really need an actual human to human interaction in life”.
Another demerit that was mentioned several times concerned the mental health care context in which the CA was used. Participants indicated that a PACA might not be supportive when it specifically comes to severe cases of complex mental health issues, which they considered “probably too hard to solve for today’s AI solutions”. One person elaborated that “in mental health, they [PACAs] could do serious damage just by not understanding and addressing user needs”. According to the participants’ responses, one of the main reasons for such unforeseeable outcomes was “negative or destructive behavior” that a PACA can evoke in patients. Specifically, “aggressive or dominant behavior” by the PACA might lead to the patient “completely closing off and losing hope”. In contrast, other responses mentioned desocialization as a caveat, noting that patients can become “dependent on the PACA and start to distance from reality and real people”. Other demerits stated by the participants were that communicating with a PACA would be “creepy” and “odd” for them.
One of the merits most frequently mentioned by the participants was the accessibility/availability of the PACA. While one participant thought the PACA “provides an escape from challenging emotional situations whenever necessary. […] Raffi can be available when the therapist is not”, another subject stated that “it can be helpful because it functions like a relief cushion for the patient while they wait for a therapist assigned to them. They feel understood and listened to, no matter how trivial the conversation with the PACA may be”. Being listened to is another merit that was brought up several times. The respondents indicated the PACA would be like a “penpal” or “like the best friend you have at home”. Further benefits that were listed several times include the PACA’s ability to put patients at ease (“it can give out apps to help soothe the mind”), to memorize past conversations (“[…] it does not forget what has already been discussed and is not annoyed when the same topic comes up again and again”), and to create a personalized experience (“[…] it makes you feel like there is some more meaning to sharing it with something that can at least pretend to care. It can help personalize your experience. This makes people feel worthy”).
A large proportion of the participants stated that a PACA might be helpful for mental health therapy by motivating and/or advising the patients, specifically by being a “helpful support in everyday life”. One participant further pointed out that if “developed carefully, and deployed and monitored safely, PACAs have enormous potential for helping bridge the gap between patients’ needs and the mental health system’s available qualified staff and resources”. Some respondents noted that “[…] if it feels genuine with good and varied tips/tasks/advice” and “[…] if the AI is so genuine that it’s hard to distinguish from a human” they can imagine using the PACA as a support system for mental health issues. While some participants stated that they do not fear that the PACA could become manipulative or pose a danger, other respondents wrote that they only partially believe in the merits of the PACA. They specifically noted that a PACA can be considered a “short-term supporting system” that “prepares the patients mentally” but that human therapists should “regularly intervene and supervise”. In fact, the suggestion that the PACA should be monitored by human therapists was made by a majority of participants. Another limited merit brought up several times by the respondents is that the benefit of using a PACA largely depends on how the PACA is designed and skilled. They made comments such as “communication style, body language and tone of voice is very important and powerful elements of communication and have a great impact on others”.
4.2. PACA Trust
Another important factor identified in the participants’ responses was trust. Trust is commonly understood as the “willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party” [43
] (p. 712). Participants mentioned that “trust is very important for all the time” and that trust is an important precondition for the development and maintenance of a long-term relationship with the PACA. However, the participants had different views on whether a PACA can build up enough trust to establish a long-term relationship, even after seeing a “real” human therapist.
On one hand, some participants argued that they would stop using the PACA once a human therapist was available. For example, one participant stated, “if I were seeing a real human therapist, I would not see the need to continue chatting with the PACA”. In many cases, concerns related to the compatibility of two simultaneous therapies: if the advice of the PACA were not aligned with that of the human therapist, it could cause problems for the patient. The assumption that a human therapist always has the advantage over a PACA led many participants to decide against a long-term relationship with the PACA. One respondent saw human supervision as an obligatory requirement for using a PACA over a longer period: “[…] not as long as there is no human supervision behind Raffi. Even human therapists in training have mandatory supervision”.
Trust issues were also associated with the difference between humans and AI, in that “a PACA is not human and cannot fully understand the full range of issues a person is dealing with”. Only if it were “hard to distinguish from a human […] or gives such good advice” could it be a partner in a long-term relationship. For example, one participant expected the PACA to get “rid of any flaws and be very helpful in my everyday life for me to talk to it like a spouse I married”. Finally, some participants expressed general privacy concerns that would hinder any form of a long-term relationship (“I don’t trust any listening device, the privacy risks are simply too great”). In particular, the storage of data and the implementation of the software would need to be transparent and understandable. For example, participants stated: “I could see myself building trust only if the program continued to be trustworthy to use”. Some replies connected the long-term abilities of a PACA with the quality level of the software, stating for example: “Only when the AI is so genuine that it’s hard to distinguish from a human I would have contact”. Accordingly, there were misgivings about the current state of AI solutions. The participants seemed to believe that a human-AI relationship would only make sense if the quality of the AI’s advice offered surplus benefits on top of those of a human therapist.
In terms of creating a bond with humans, many participants stated that they would highly appreciate the ability to memorize past conversations: “Listen, remember, build suggestions and recommendations on previous conversations not on general conclusion.” Many participants seemed to define this characteristic as that of a “good listener”. This desire to have an active listener in a therapy bot underlines participants’ wish for the individualization of therapy. This aspect is strongly connected to the learning abilities of the PACA. To build a bond, the PACA should be able to learn about the patient over time and adapt accordingly. Statements included, for example: “[…] the PACA [is] required to […] build on the previous experiences has with the person. It has to show an understanding for the situation of that person and be able to react properly.” This answer also shows the importance of empathy in the conversation with a PACA. The ability to fully understand the patient’s situation is only possible by learning over time and developing contextual ‘thinking’. Adequate reactions of the PACA are strongly connected to its ability to assess situations or feelings the patient might have experienced.
With regard to building trust in a PACA, answers pointed to transparency about the software’s limitations. For some respondents, this would significantly lower their concerns about security. Suggestions also included assessing and evaluating the software regularly, which could involve external assessments by experts or authorities as well as user feedback that would help improve the software.
Despite these caveats, other participants were open to establishing trust and maintaining a relationship with the PACA over a longer period. They appeared to feel “comfortable talking to it even after seeing a human therapist”. One of the reasons stated was that they would “find it easier instead of constantly calling the therapist”, particularly because they considered some of their issues “just too small to bother someone with”. These participants could imagine themselves trusting the PACA not only because it can “give something back and seems to care”, but also because it would be “like a friend you have at home”. Moreover, participants stated that “the more you interact with it the easier it could be” to build trust and maintain a relationship with the PACA.
4.3. PACA Privacy
The category PACA Privacy captured all concerns of the participants regarding the necessity for a PACA to gather and analyze (sensitive) data in order to assess a user’s personality and adapt accordingly. Privacy refers to the non-public sphere in which a person pursues the free development of his or her personality undisturbed by external influences. Concerns arise when such private information could enter the public domain without authorization [44
]. The participants addressed aspects that they considered to be particularly critical as well as those they did not consider critical. The most important critical aspect was the potential invasion of privacy, as participants did not feel comfortable sharing personal information and would “feel a little bit under a microscope”. One participant stated that it “sounds alarming to allow a PACA access to your personal data and communications”, while another said that as a “user you always need to be aware of what the information could be used for and vulnerabilities always exist”. Another participant stated: “No, that’s invasion of my privacy. I do not feel comfortable with allowing access to my personal messages and phone history.” Caveats against sharing personal information seem to be connected to data privacy issues on social media, with one response indicating that “the overall trust in todays messenger systems has suffered a lot over the past years due to several events.” It appears that the underlying problem is the loss of control over personal data. Even if the program itself is trustworthy, the potential risk of hacking and exposing vulnerable data to criminals is reason enough for participants not to share their private chat history with a PACA.
Apart from being unwilling to give the PACA access to private data, responses also expressed concerns about its practicality from a legal perspective. Specifically, one respondent mentioned the US HIPAA laws, which regulate and provide guidance for the proper uses and disclosures of private health information. These laws further define how to secure the data and what to do in case of a breach of the rules [45].
Concerning data handling, one participant suggested conducting personality tests with the patient instead of requiring full access to social media accounts. Generally, data is not supposed to be stored longer than needed. The program should also be transparent about how the data is being deleted.
On the bright side, the participants seemed to agree that they need to provide data in order for the PACA to work properly. “Yes, I would allow it to access my data. I would be willing to trust it if it could help me in the long run”, said one participant. To “get the best results”, the participants agreed on providing data to the PACA so that it can adapt to a user and “help my therapy in a positive way”. Even though many participants had concerns about the use of sensitive data, they appeared to be willing to share their data under certain conditions to take advantage of the PACA. The majority opinion was that data should be sent and stored in encrypted form and not passed on to third parties. Participants further agreed that only the critical information should be used and that the data should be deleted when no longer needed. If these specific conditions were met and explicitly communicated by the PACA, a disclosure of private information was acceptable to the participants. The perceived benefit of using the personal information should also be communicated by the PACA and be visible to the user. Therefore, the design of the PACA and its handling of personal data are critical. Table 2
summarizes the results for all categories.
5. Addressing the Caveats of PACAs in Mental Health Care
The results of the survey show that a significant number of caveats are associated with the PACA’s ability to substitute for a human and mirror the skill set of a real therapist. Comments included doubts about the PACA being able to detect severe mental illness, to understand the full range of problems, or to think contextually. While certain suggested functionalities could guarantee that the PACA is not used for cases beyond its scope, others might be hard to accomplish with current technological know-how. Even though AI is constantly improving and the effectiveness of CAs has been evidenced in multiple studies, highly complex conversations, let alone therapy on the level of a human therapist, are not likely to be realized in the near future [7].
In contrast to these technological limitations, privacy concerns can be addressed with the current state of technology. Major caveats concerning the storage and processing of data can potentially be addressed by giving data privacy a high priority during the development phase of a PACA. Encryption, decentralized data structures, and protection by multiple authentication methods are valuable suggestions that would also help to increase trust and lower concerns. Measures to secure sensitive data, such as health-related information, in cloud environments already exist and could be implemented [46
]. Limitations may exist in efforts to run the software completely on local devices. Especially when it comes to privacy-related caveats, communication and transparency are very important, as trust is based on data security.
Other than data-privacy-based trust, the results also underline the importance of the PACA’s communication style. Generally, a friendly and cheerful, but confident, appearance was appreciated by the respondents. This observation aligns with the latest research on the personalities of CAs in mental health care [7
]. Although the survey data provide insights that help answer the initial research questions, there are further limitations to consider. The participants of the survey were randomly chosen. Even though the sample is balanced from a demographic perspective, most of the respondents were not familiar with the concept of PACAs and relied on the impression given in the introduction of the survey. The participants’ expertise regarding PACA design and the available technological options was also limited. Nevertheless, the results are sufficient to derive requirements that answer the research questions posed in this paper.
5.1. Requirements
One of the major concerns mentioned in the survey was the overall privacy of data. Accordingly, the security of the software system should have the highest priority. This includes the storage and processing of data that is necessary for the functionality of PACAs. To prevent misuse of highly sensitive data by external parties, adequate protection against attacks by hackers is required. To assure legal correctness, the usage of chat history and social media insights needs to be checked or certified by the authorities. Data privacy laws are highly specific and can vary significantly between states [48
]. Overall, the need for the users’ data must be kept as low as possible.
Regarding the technological abilities of the software, it is important that the program functions flawlessly, as mentally ill patients are highly sensitive to mistreatment or undetected imminent safety risks [3
]. Certification by mental health care experts or mental health authorities must be undertaken to ensure high-quality care. It is also necessary that companies offering PACAs for mental health care are transparent about their expectations, limitations, and field of use before granting full access to the PACA’s services. Transparency was also considered one of the main factors for building trust in the program; this includes its functionality as well as data privacy.
Further requirements for PACAs can be identified in the way they express themselves. In terms of verbal language, the wish for a friendly, considerate, and cheerful, but also confident and polite, tone was mentioned several times. Balancing cheerfulness and humor with confidence, without pressuring a patient, is one of the most challenging tasks for future PACAs. Body language, such as gentle, human-like gestures, is expected to support the impression the PACA makes.
Regarding verbal and para-verbal language cues, a PACA should be required to learn about a patient the same way a human therapist would in order to develop a contextual form of understanding and build a long-term bond.
Finally, the PACA should be compatible with real therapy. Most participants do not see today’s PACAs as a substitute for therapists, which raises the question of how both can be combined. In any case, it must be ensured that the PACA does not promote counterproductive advice and, in the best case, is even involved in the therapy.
5.2. Solution Approaches
The following suggestions for the design of future PACAs were derived from the survey data and are aimed at reducing people’s caveats regarding PACAs in mental health care. As overall privacy and data security were among the main identified requirements, several actions could be considered for the future design of PACAs. To reduce the risk of misuse or leakage of patients’ sensitive health data, it is recommended that a decentralized data structure be created for data storage. Wherever possible, data should be stored on the user’s device and protected behind its firewalls. Even though this reduces the risk of digital data theft, physical theft or loss of the phone or computer needs to be considered as well [49
]. It may be helpful to include emergency software functions that ensure the deletion of on-device data in case of loss. Similar functions are available on the latest smartphones, such as Apple’s iOS devices. Additionally, local data could be secured using multiple authentication methods. This could include, for example, a one-time passcode (OTP) or facial recognition in addition to traditional passwords.
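As an illustration of the OTP suggestion, a time-based one-time passcode can be generated with standard-library primitives alone, following the RFC 6238 scheme; this is a minimal sketch of the mechanism, not a production authentication module:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """Time-based one-time passcode (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Demo secret (base32); a real deployment would provision one per user.
print(totp("JBSWY3DPEHPK3PXP"))
```

Verifying the code on-device then amounts to comparing the user's entry against `totp(secret)` for the current and adjacent timesteps.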
While the short and even long-term storage of data on the device is relatively easy to achieve, the on-device processing of data is significantly harder to accomplish. Comprehensive AI solutions, which are necessary for the realization of a PACA, usually rely on cloud computing [50
]. However, the software could be run on the devices whenever the computational power is sufficient and rely on cloud computing only when strictly necessary. Further options include end-to-end encryption of data or anonymized processing of data. This would ensure that the processed information cannot be traced back to the individual user. In any case, it is recommended to run the service with infrastructure providers that specialize in the protection of sensitive data. To guarantee compliance with local data security regulations, systems should be checked in cooperation with authorities or experts.
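One way to realize the anonymized processing mentioned above is pseudonymization: replacing the user identity with a one-way hash before any record is handed to cloud components. The helper below is a hypothetical sketch; the device-local salt and the 16-character pseudonym length are design assumptions:

```python
import hashlib
import secrets

def pseudonymize(user_id, salt):
    """Derive a one-way pseudonym from the user ID and a device-local salt.

    Without the salt (which never leaves the device), the cloud side
    cannot link a processed record back to the real identity.
    """
    return hashlib.sha256(salt + user_id.encode()).hexdigest()[:16]

salt = secrets.token_bytes(16)  # generated once, stored only on the device
record = {"user": pseudonymize("patient-42", salt),
          "text": "session transcript ..."}
print(record["user"])  # stable per user, meaningless off-device
```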
Regarding concerns about conversational data, participants also expressed doubts about the necessity of granting full access to social media accounts and chat history. To reduce the amount of data used to adapt the PACA to the user’s personality, it might be helpful to let users decide which data they want to share. In addition, personality tests or pre-therapy conversations could be conducted with the PACA to gain information about the patient’s personality traits. The use of personality tests could also help identify severe mental illnesses or behavior that potentially exceeds the capabilities of the PACA. This would address the concern of many respondents that a PACA could give inadequate advice, resulting in counterproductive effects on the patient’s mental health. In addition, functions could be implemented that help people seek proper help from therapists [3
]. As many respondents see PACAs as an addition to real therapy or as serving a bridging function, it is recommended to develop approaches for integrating PACAs into later therapy. This could include, for example, possibilities for the therapist to monitor the patient’s conversations with the PACA or to reflect on them during therapy sessions. During the development process of a PACA, independent psychology experts could also examine its functionality, quality, and compatibility with therapy. Regular checks could include testing its ability to use verbal, para-verbal, and body language adequately and thereby meet patients’ needs.
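The personality-test alternative suggested above could, for instance, take the form of a short Likert-scale inventory scored per trait on-device. The items, trait keys, and reverse-scoring scheme below are illustrative stand-ins for a validated instrument (e.g., a short Big Five questionnaire):

```python
# Hypothetical short inventory: (item text, trait, reverse-scored?).
ITEMS = [
    ("I see myself as extraverted, enthusiastic.", "extraversion", False),
    ("I see myself as reserved, quiet.",           "extraversion", True),
    ("I see myself as sympathetic, warm.",         "agreeableness", False),
    ("I see myself as critical, quarrelsome.",     "agreeableness", True),
]

def score(answers, scale_max=7):
    """Average per-trait scores on a 1..scale_max Likert scale,
    reverse-coding the items flagged as reversed."""
    totals, counts = {}, {}
    for (_, trait, is_reversed), answer in zip(ITEMS, answers):
        value = (scale_max + 1 - answer) if is_reversed else answer
        totals[trait] = totals.get(trait, 0) + value
        counts[trait] = counts.get(trait, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}

print(score([6, 2, 5, 3]))  # -> {'extraversion': 6.0, 'agreeableness': 5.0}
```

Scoring locally in this way gives the PACA a trait estimate without requiring any access to social media accounts or chat history.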
Finally, transparency and communication play key roles in the use of PACAs in mental health care. Transparency about the data security concepts and functionalities in particular could help reduce users’ concerns: if users do not know where their data is being stored and processed, the caveats remain despite highly sophisticated security concepts. Transparent communication can therefore be seen as a tool to reduce people’s concerns. It would further help to clearly communicate the abilities, limits, and scope of PACAs in order to create realistic expectations and prevent misuse or disappointment. Ultimately, this is the only way to create long-term trust in new solutions such as PACAs.
Reflecting on the results of this study, it is important to consider how the initial motivation (globally rising numbers of mental illnesses and the effects of the COVID-19 pandemic) strengthens the need for additional health care workers worldwide. In low-income countries, which lack the fully functional health care systems of high-income countries, the situation is even worse. The figure of approximately 2 health care workers per 100,000 inhabitants highlights this urgency. Against this background, CAs offer the potential to reduce the shortage of health care workers. By qualitatively surveying a total of 60 people, we were able to identify potential benefits and caveats, which we then translated into general requirements and proposed solutions. The results of our study shed light on both the negative and positive aspects of PACAs and contribute to theory and practice.
6.1. Theoretical Contributions
As expected, most participants were critical rather than indifferent regarding their sensitive data. Although several participants stated that they could imagine building a trustworthy relationship with the PACA, it should not be ignored that almost as many indicated they did not find the PACA from the example trustworthy. This suggests that people perceive CAs differently and have varied preferences concerning the communication style of a CA, once again highlighting individual differences. Consistent with findings from previous studies [14
], a PACA may offer helpful support to people in need, put them at ease, and act as a friend who listens when human therapists are not available; specifically in light of the pandemic, this can be considered an enormous benefit. However, in line with existing research [3
], PACAs may also create an unintended (emotional) dependency which, for example, can lead to a further reduction in socializing. If these issues are not addressed properly, Weizenbaum’s caveat of a “Nightmare Computer” could indeed come true. In the 1960s, AI capabilities were limited, and much like her namesake Eliza Doolittle from the play “Pygmalion” [51
], Weizenbaum’s ELIZA had no understanding of the actual conversation but merely simulated a discourse with intelligent phrasing. Yet ELIZA simulated her psychotherapeutic conversations so convincingly that people became deeply and emotionally involved with the program. This demonstrates how “simple” verbal communication can be used, or taken advantage of, to achieve positive or negative outcomes. With today’s powerful AI capabilities, the current critical voices regarding AI ethics are therefore very much justified. Focusing on the design of CAs without carefully considering the potential consequences for people’s well-being can backfire quickly. Although humans know, from a philosophical perspective, that machines are not capable of expressing “real” feelings, they still respond to them emotionally as if they were. Poor communication skills in a CA, and especially in a PACA that can adapt closely to the user’s communication preferences, could aggravate negative health outcomes instead of improving them.
Our findings contribute to the growing research stream on AI in the sustainability field. In this context, AI has been studied primarily with respect to ethical aspects [12
], for possible sustainable business practices [52
], or specifically for how AI can contribute to the sustainable development goals [53
]. In particular, prior work has highlighted the factors that make sociotechnical systems unsustainable. With these findings in mind, we position our research within this stream and show how AI can be used in the context of health care and specifically mental health care.
6.2. Practical Implications
Based on the insights from our participants, we derived requirements that would need to be fulfilled to reduce users’ expressed concerns. Chief among these are the assurance of absolute data privacy via modern security concepts and flawlessly functioning software. Ideally, both should be confirmed by external authorities or experts. The survey data suggest that merely fulfilling requirements concerning data security or software development is not sufficient to reduce concerns. Even though these requirements need to be fulfilled, communication and transparency concerning data privacy as well as the functionality and limitations of the software are equally important. In addition, the analysis of the survey data raises ethical concerns and the question of whether they can be solved in the near future. Beyond the aforementioned privacy concerns and the risk of harm, the general risk of bias in machine learning algorithms also applies to PACAs. The data used to train the AI strongly shapes how the system later treats certain characteristics of the data, such as gender or ethnic origin [3
]. It is therefore essential to avoid training algorithms in ways that could introduce bias and thus reinforce discrimination or exclusion.
For implementing a PACA and offering such a service, it is therefore important to follow a set of guidelines to create a safe and benevolent service. These guidelines result from the discussed requirements and solution approaches: (1) guarantee a high level of data security through current standards, (2) ensure high transparency about the interaction with and capabilities of a PACA, and (3) develop the PACA together with experts (therapists and psychologists) and without bias.
Furthermore, it is important not to advertise such a service as a substitute for therapy, but as a first point of contact or an accompaniment to therapy. The marketing of a PACA in the field of mental health care is therefore just as important as its design and implementation. We advise companies to consider how PACAs can be implemented and offered, and we advise governments and non-profit organizations to consider how AI can be used to address the major problem of mental health issues. Organizations such as the WHO can learn from our findings to launch initiatives that promote AI and PACAs and to develop support and funding options. In this case, it is necessary not only to offer commercial systems (such as Woebot or Replika), which cannot necessarily be used in low-income regions of the world, but also to implement and offer free-to-use, non-profit systems (PACAs in particular).