1. Introduction
Artificial intelligence (AI) technologies have advanced dramatically in recent years. The excitement has been evident, and the possible uses appear nearly endless. Like any technology, however, AI can be used for good or for ill. If the use of AI, and of large language models (LLMs) in particular, is to contribute to our society rather than hinder it, we should continually question whether specific uses of these technologies are in fact conducive to human flourishing. This concerns both whether AI products are designed to be conducive to flourishing, and whether, as individuals and communities, we are shaping our practices so as to engage with those applications in ways that are helpful and to avoid those that are not. The objective of the present paper is to identify, at the level of the product developer and of the individual user, sets of considerations that shape whether AI technologies promote or impede flourishing. We believe this is a good place to begin, as such individual decision-making is within each person’s control, and each person thus has some capacity to shape their own flourishing and that of others. We acknowledge that there are other considerations relevant to the effects of AI on flourishing at the societal level, including matters related to security, international relations, business competitiveness, job insecurity, justice, and the environment, among others. Such broader societal and policy considerations are also central when considering the design and use of AI technologies (e.g., [1,2,3,4,5,6]). However, even individual-level decision-making will go on to shape societal- and policy-level considerations, and regardless of policy, individuals will still have some autonomy and control over their own decisions about the use of these technologies. The present paper will thus engage only briefly with questions of the interface between policy and individuals, and will concentrate instead on the interface between product developers and individual users. While our paper may be seen by some as cautionary, we believe that such caution is warranted in part because, as will be evident from the discussion that follows, many of the potential benefits of AI technologies are instrumental, whereas many of the potential threats concern intrinsic aspects of our flourishing as human persons.
Our focus will also be on what, as we see it, we ought to do, rather than on regulatory frameworks. While we believe some of the considerations we put forward may help guide future development of regulatory frameworks, we will here consider how, as individuals and as communities, we might design and engage with these technologies in a manner that is conducive to human flourishing.
The paper offers comment on five sets of considerations concerning AI and flourishing to help guide discussions around, development of, and engagement with AI technologies, with the hope of better orienting AI towards the promotion of individual and communal flourishing. Two sets of considerations pertain to AI product developers, concerning (i) the nature of the output provided by LLMs and (ii) the specific design and packaging of particular AI products. The other three sets pertain to the users of AI products, concerning (iii) decisions about the extent and nature of use, (iv) the effects of AI use on human knowledge, and (v) the effects of AI use on human persons and communities that extend beyond matters of knowledge. Given this focus on user decision-making, and also given the extent of individual user engagement with LLM products (i.e., chatbots), our discussion and examples will mostly pertain to LLMs.
The working definition of flourishing used in this paper is “the relative attainment of a state in which all aspects of a person’s life are good, including the contexts in which that person lives” [7,8]. Understood thus, flourishing is all-inclusive and multi-dimensional. We may be flourishing in some ways and not in others. Flourishing is also an ideal; it is not something we ever perfectly attain in this life, and there is always room for improvement. Flourishing also includes the contexts, communities, and natural and social environments in which we live, both because those contexts can be conducive to our own well-being, and because good communities around us are themselves part of what makes life good.
In prior work, we have operationalized the assessment of individual flourishing around six key domains of human life: happiness, health, meaning, character, relationships, and financial security [7,9]. The first five of these domains constitute important ends in their own right, and the sixth, financial security, is a critical means to help attain those ends. A brief 12-item measure (see Table 1), with two questions in each of these six domains, has been employed in numerous countries across the globe [7,9]. Such a conceptualization is in no way exhaustive of the dimensions of flourishing, but these domains appear to be nearly universally valued across persons and cultures, and are arguably a good place to begin with regard to consensus around flourishing. It can be helpful to evaluate, for example, how a specific AI technology—and our engagement with it—is conducive (or not) to our happiness, health, meaning, character, relationships, and financial security.
There are numerous other frameworks for and measures of flourishing that might also be considered [10,11,12,13,14]. Ryff’s conceptualization of psychological well-being, for example, includes purpose in life, personal growth, self-acceptance, positive relations with others, autonomy, and environmental mastery [10]. The model put forward by Keyes includes one’s affective state, numerous aspects of psychological well-being (as per Ryff), and also social functioning [11]. Seligman’s PERMA model considers positive emotions, engagement, relationships, meaning, and achievement [12]. A measure put forward by Su et al. [14] is yet more extensive. However, many of the flourishing domains are common across conceptualizations and measures. We will focus here on how AI technologies may shape the domains of happiness, health, meaning, character, relationships, and financial security, and these domains will be used in part to select the topics discussed in the following sections. Many of these domains, however, are included in other conceptualizations and measures of flourishing as well, and so the considerations we put forward are equally applicable to them. Moreover, for additional or alternative flourishing domains, a similar approach to the one we propose here could be employed with respect to other outcomes and measures.
In what follows we will thus apply this flourishing lens to questions concerning AI technologies with regard to (i) the output provided by LLMs; (ii) specific AI product design; (iii) our engagement with those products; and the effects of this engagement on (iv) human knowledge and (v) the self-realization of the human person. These five sets of considerations concerning the development and deployment of AI products do not address every aspect of the human experience with these emerging technologies, but they provide a reasonably general framework for assessing whether AI products and their uses are conducive or detrimental to human flourishing.
3. Flourishing and Product Design
Closely related to the principles guiding the responses provided by LLMs is the question of product design, including the user interface, which informs and guides the user’s employment of the underlying technology. It is beyond question that some LLM applications have brought, and will continue to bring, benefits to their users. The capacity of AI technologies to function as a more advanced search engine can help users more quickly uncover important source material. AI technologies have already proven useful in the generation of computer code. They can be helpful in building travel itineraries. The possibilities can sometimes seem endless. However, in thinking about AI products and flourishing, we believe each product should be considered both with regard to its short-term and long-term effects on the flourishing of the user, and with respect to potential externalities of the product itself.
With regard to the effects on the flourishing of users, we believe there are cases in which the use of a particular product is either an unambiguous good or, at the very least, relatively neutral and unlikely to impede flourishing; there are other cases in which the effects on flourishing are very likely to ultimately be detrimental; and, finally, there are more difficult cases in which some engagement with the product may be helpful but, after a certain threshold, engagement becomes counterproductive. Examples of the first case, of relatively unambiguously positive or neutral effects on flourishing, might include many uses of LLMs as search engines, or for help in planning travel itineraries or train schedules. AI products assisting in statistical analyses or civil engineering tasks, or applications of AI-assisted robotic surgery, might also constitute examples of this first type. Such applications might advance the health, well-being, and financial security of individuals and communities, with very little or no downside. We should identify and seek out such applications.
Examples of the second case of clearly detrimental effects on flourishing are perhaps most evident in certain types of chatbots, including the social and relational chatbots now being designed [18]. One recent study indicated that 33% of American teenagers report using AI companions for social interaction and relationships [19]; another study indicated that 52% of teens were regular users of AI companions and that 72% had tried an AI companion at least once [20]. While such technologies may temporarily alleviate loneliness, the longer-term effects on flourishing are likely detrimental [21]. They decrease motivation and the time available for engaging in face-to-face interactions. They create unrealistic expectations as to the sort of interactions, sympathy, and comfort one may hope for in a romantic partner or a friendship [22,23]. They thereby likely inhibit the user’s capacity for face-to-face relationships. This in turn alters the broader social environment and our capacities to engage with one another. The long-term effects of such products would seem to diminish societal flourishing considerably by impeding real-world relationships. Such relational chatbots likely also have a role in identity formation, which may affect a person’s sense of meaning [24,25,26]. Social relationships, moreover, have powerful effects on other aspects of flourishing such as happiness, meaning, and character [27,28], and a weaker set of social relationships will thus likely also alter these other aspects of flourishing. We believe the development of most or all relational and social chatbot products should thus be discontinued. Developers of such products should be required to justify their development, and should be held morally (and legally) accountable if the products developed hinder human relationships.
The specific user interface, of course, also matters, and can in some instances alter whether a product is helpful or detrimental. For example, while there may be laudable applications of AI products in assisting with statistical analyses, chatbot applications created specifically to solve students’ homework assignments are likely to impede learning. In contrasting these two cases, the underlying algorithms may be nearly identical, but the product design in the one case may be conducive to flourishing, and in the other may impede it.
The third, more difficult, intermediate case concerns settings in which some engagement with AI technologies may be beneficial, but in which further engagement becomes counterproductive. This is perhaps most notably relevant to questions of “skill-building.” There have been applications, for instance, of AI in helping autistic children to have more normal social interactions [29], or in helping students develop capacities for civil discourse across differences [30]. These supports may go on to positively affect other aspects of flourishing such as social relationships, meaning, character, and knowledge. Such applications may well be beneficial in initially developing such skills, but users should arguably still ultimately be redirected to face-to-face interactions. If users become dependent on the technology, or use it to avoid face-to-face interactions, or develop unrealistic views of what may be expected from actual people, then, once again, individual and societal flourishing is likely to be impeded. Related cases may arise in mental health counseling and education. Carefully constructed mental health AI tools may eventually surpass the effectiveness of Cognitive Behavior Therapy tools, already known to be effective from randomized trials [31], and some use of these may indeed be beneficial. Nevertheless, they cannot replace the human care provided in counseling. Healthcare concerns not only the provision of services, but a compassionate caring for the person in need, which is a distinctively human activity. Likewise, educational chatbots may well eventually prove superior to, and more efficient than, say, massive open online courses (MOOCs), and may be worth employing in various contexts. However, a relationship with a human teacher will still be critical in helping to form the whole person, in modeling the integration of knowledge into life and emotion, and in developing the capacities for mutual understanding and for human interaction and exchange of ideas [6,32]. We ought ultimately to be concerned not only with knowledge and cognitive capacities within education, but also with matters of meaning, character, relationships, and fulfillment.
While discerning which of these three cases a particular AI product falls into, and also the proper bounds in the third case, will not always be straightforward, we believe some consideration of these issues before AI products are developed would be valuable. Another, perhaps yet more difficult, set of considerations in evaluating specific productizations of AI technologies concerns the externalities that such products may create. While the use of AI technologies to improve and more efficiently create computer code does, at one level, seem like an unambiguous good, the effects on the lives, well-being, and financial security of software engineers, sometimes rendering them redundant, can of course be profound. Similar considerations pertain to teachers, to mental health counselors, and to many others in diverse sectors. This phenomenon is in no way novel to AI technologies, and similar challenges have been present throughout the history of technological innovation [33]. Nevertheless, considering such effects in advance, and envisioning what might be done, including alternative vocational paths for those who may be affected, would be valuable.
5. Consequences for Human Knowledge
A fourth set of considerations for trying to ensure that AI technologies advance, rather than impede, flourishing concerns knowledge itself. The capacity of LLMs to summarize and synthesize vast amounts of information is astounding, and they are rightly valued for it. However, their limitations, and the implications of those limitations, need to be considered if these technologies are ultimately to advance human knowledge and understanding rather than impede them. Of particular concern is the reality of LLM hallucinations. While much LLM output is based on reliable sources and summaries, it has also become clear, given that these models are at root elaborate and powerful next-token prediction devices, that they will often generate responses that are simply wrong. If too much trust is placed in such LLM responses—if the responses are considered knowledge—our grasp of what is true will inevitably suffer. Knowledge itself might ultimately be understood as justified, true belief, the evidence for which cannot be overturned. For knowledge to advance, we need to regularly assess whether the evidence provided for a particular statement can, or cannot, be overturned. Otherwise, human knowledge may well be impeded, not enhanced. Moreover, since AI-generated content is in some cases fed back into the training of subsequent models, the possibility for the proliferation of error is substantial. Poorer knowledge, and a poorer capacity to assess knowledge, will in the long term also affect our capacities to improve health, happiness, community, and other aspects of well-being. Furthermore, if LLMs begin to shape content around the learned preferences of users, the problems could become yet worse. There may be dangers not only with respect to knowledge, but also of yet more political and social polarization and societal discord (cf. [35]).
Such dangers, and their effects not only on the individual user but on the propagation of error throughout society, need to be taken seriously, and efforts should be made to address them. Certainly, to the extent that such “hallucinations” can be reduced, we should make efforts to do so. Mitigation techniques such as retrieval-augmented generation (RAG) and verification layers can be employed [36,37], though these have limitations [38,39]. It can also be useful to evaluate and report hallucination rates across different platforms, both to inform users and to create greater competition to reduce such rates, as the Vectara Hallucination Leaderboard has done [40]. However, recent evidence and arguments suggest that such hallucinations may simply be an intrinsic part of LLM operation that we may never be able to eliminate entirely [41,42]. For this reason, other approaches are also needed. Critically, on the part of the LLM technologies themselves, it should become standard for LLM responses to reference the underlying source material so that users can check the material and try to evaluate the reliability of the sources. Systems might also begin to automatically provide numeric assessments of the likely accuracy of the information provided, further enabling users to evaluate and check the reliability of LLM responses. Warnings of possible inaccuracies could also become routine. When specific AI applications are being employed to create documents, generate code, or analyze data, it might be desirable, in institutional settings, to require that users first demonstrate mastery of the particular task on their own before allowing the use of AI technologies to assist. This would better ensure that users have the capacity to evaluate the output of such technologies.
If knowledge is justified, true belief, the evidence for which cannot be overturned, then in order to advance knowledge we need to regularly evaluate the extent and quality of the evidence. AI technologies certainly do have the potential to expand our access to knowledge. However, given their current limitations, we arguably now need to require more critical scrutiny of the output, and more transparency from LLMs, so that we can evaluate whether the evidence can, or cannot, potentially be overturned. Evaluating evidence is a distinctively human activity, and we must not lose the capacity for it; indeed, the need for it is arguably now even greater than before. If LLMs are to be genuinely conducive to human knowledge—and the potential here is substantial—such matters must be taken seriously.
6. AI and the Flourishing Person
The reflections on human knowledge above also point towards a final, overarching set of considerations regarding flourishing and AI technologies that extends beyond knowledge: we must consistently return to the question of what it is to flourish as a human person, and of what sorts of activities we simply cannot “outsource” without sacrificing our own flourishing. Among others, we would argue that we cannot outsource human reason, human creativity, human relationships, human meaning, and human joy without also ultimately giving up on the good life and on human flourishing. If AI technologies—and our engagement with them—are to be conducive to flourishing, they should not compromise these aspects of human life.
Reasoning itself is one activity that makes us human, that allows us to flourish, and that is even constitutive of flourishing. Across centuries, many have pointed to the human capacity for reason and rational discourse through language as a central part of human life that makes us distinct from other creatures [43]. We cannot give up on, or outsource, human reason without also giving up on our flourishing. Our engagement with AI technologies must be carried out in a way that is conducive to human reasoning rather than superseding or impeding it. Indeed, current evidence suggests that use of these technologies can impair cognitive functioning [44]. Careful consideration of what is generated in LLM responses, and evaluation of whether they constitute evidence that can be the basis for knowledge, is one way to help ensure this. Ideally, even when AI technologies are used to assist with reasoning, or to improve writing, users should habitually spend time afterwards discerning what they might learn about how they could carry out such activities better themselves. We should reflect upon ways to ensure that our uses of AI do not lead to an atrophy of the mind. Continually asking ourselves, with each use, what we might do to prevent the weakening of our minds is arguably critical if we are to preserve and strengthen our capacity for reason. If someone is not willing to ask such questions with each and every use, that person may ultimately be better off not employing AI technologies at all. We are worse off as human persons if such uses lead to weaker minds. Such considerations are also important in thinking about education. Students making use of LLMs and other AI technologies to complete homework assignments are less likely to learn the skills or information the assignments are designed to reinforce. Students who routinely use such technologies to generate essays will likely not be able to write as well themselves. As noted above, there may well be beneficial uses of AI technologies in education. However, we should continually evaluate whether such technologies are fostering, or diminishing, our capacities for reasoning and communication. These are distinctively human capacities that enable and constitute our flourishing. We must not give them up, and we must ensure that our educational systems, and our individual practices, help foster these capacities. Viewed positively, the question of the appropriate use and limits of AI technologies in education might prompt greater discussion of, and clarity around, the ends of education in human formation and human flourishing [45].
Likewise, our creative capacities are part of what makes us human and what allows us to flourish. Attempting to draw, or write a poem, or compose or perform a musical piece, is very different from commanding an AI technology to produce the same. Even if we think the AI product looks or sounds better, we have left behind the actual carrying out of the creative work that can itself be so satisfying. Our very attempts at such creative activity, even if sometimes unsuccessful or frustrating, can help develop patience and fortitude, can over time allow us to develop skills, can better enable our creative capacities, and can ultimately allow us to better attain our human potential. Again, the use of our creative capacities simply is part of what makes us human, and part of what not only advances, but constitutes, our flourishing. We should embrace our own efforts—even amateur efforts—in this regard. We should not allow the seemingly “more professional” outputs of these technologies to deter us from our own creative pursuits. We should appreciate and value the distinctively human aspects of our creative endeavors and their products, along with the social fabric within which these are embedded. Just as one might well prefer a gift thoughtfully chosen by one’s spouse to something automatically selected and generated by an AI system without the spouse’s knowledge, so too we should appreciate the distinctively human elements of creative activity and its products as a gift to the human community. If we neglect these matters, we will not flourish as well, individually or collectively.
As discussed above, we also cannot flourish if we attempt to outsource relationships to technologies. Our collective experience of the consequences of social media use should already have made this clear [46], but AI technologies arguably pose a yet greater threat with their capacity to partially simulate human conversations and interactions (cf. [47,48,49,50,51,52,53,54]). Social relationships—real social relationships, with real mutual affection and care—are a part of our flourishing. Perhaps especially on this point, we need vigilance. We need to ensure that we are not replacing social relationships with chatbots. We need AI technologies that constantly point us back to real relationships, rather than luring us away from them. We need to ensure that our engagement with technology still allows us time to invest in people and in communities.
Ultimately, we cannot outsource our meaning and our joy in relationships, our freedom and our responsibility, our reasoning and our understanding, our appreciation of beauty and our capacity for awe and wonder, or, in total, our flourishing, to technology. These things are a part of what it means to be human. We must thus consistently come back to these questions of whether the technologies we are developing—and our use of them—are helping us to flourish as human persons, or are potentially putting at risk part of what it is that makes us human. While AI technologies may have many instrumental benefits, we must protect ourselves from uses that threaten what is intrinsic to our flourishing as human persons.
7. Conclusions
This paper has provided five sets of principled considerations relating AI technology, and its specific productizations, to human flourishing, in order to help guide the development, use, and potential restriction of these technologies. The paper put forward a flourishing framework including the domains of happiness, health, meaning, character, relationships, and financial security to help evaluate AI technologies, though similar approaches could certainly be employed with other conceptualizations of flourishing as well. The flourishing domains also helped motivate the sets of considerations around flourishing and AI. These considerations—concerning LLM responses, product design, user engagement, and the effects AI technologies are having on human knowledge and, lastly, on what it is to be human—should, we believe, be a critical part of the framing of debates around AI. We make no claim that these sets of considerations are exhaustive, but we believe each must be taken seriously if these technologies are to promote, rather than impede, flourishing. We do not think this will be an easy task, but it is a necessary one.
We believe certain points and conclusions from the discussion above are already actionable. With regard to product developers: (i) designing LLMs to provide regular reminders that they are not human, and, separately, that the user may want to consider alternative activities or in-person human interactions, could help individuals make more prudent decisions about their own use. As we discuss further below, developers could also (ii) consider discontinuing, and ceasing to promote, all relational chatbots. Finally, we believe developers should (iii) spend the vast majority of their effort on developing products that seem to have unambiguously beneficial or neutral effects, and prioritize those over products whose effects might be more mixed or detrimental, even if profitable. On the side of the individual user, we would suggest, from the discussion above: (i) a continual questioning, with each use, of whether it is enhancing or inhibiting one’s capacities as a human person; (ii) a continual limiting of use to ensure sufficient time to be with other people and in communities in person, along with a discontinuing of all use of relational chatbots; and (iii) a commitment to verifying the knowledge claims of LLM systems (it does not take much effort to click on a website to begin to verify what is put forward, and while the extent of such engagement that is worthwhile will vary by context, a deeper commitment to do so will be essential if the web of human knowledge is not to be damaged). Some of these proposals, especially on the developer side, may involve challenges in implementation or have unanticipated consequences, and further work would of course be needed to assess this.
Many open questions remain. We have put forward proposals in this paper that would be amenable to empirical study, allowing rigorous evaluation, with observational data or randomized trials, of both effect sizes and matters of feasibility. It is not clear what magnitude of effect the LLM response reminders proposed above might have in shaping use, or how this might vary by context. We also proposed that businesses might require users to demonstrate mastery before employing AI tools; the effects of such policies on longer-term productivity could be evaluated. As AI-guided mental health applications advance, these should be evaluated, ideally in randomized trials, and compared with, for example, eCBT systems. Again, perhaps more controversially, we believe individuals would ultimately benefit from the discontinuation of all relational chatbots. Empirical studies evaluating the effects of such relational chatbots, both on short-term loneliness and, critically, on longer-term loneliness and relationship formation, would be valuable in providing evidence for making such determinations. It is, however, more challenging to evaluate the effects of relational chatbots on the overall social fabric, and the resulting spillover effects of such technologies on the relationship opportunities that may arise even for those who choose to refrain from the technologies themselves.
Important questions also remain regarding how the considerations identified in this paper may vary across cultures and contexts. Much of human nature, and much of flourishing, may be relatively universal [9], but there are certainly elements that are culturally specific. There is cross-cultural variability in AI risks, benefits, and access. Integrating global perspectives—especially from low-resource or non-Western contexts—and carrying out further empirical research on these matters would be valuable. However, the more this work and reflection can be carried out now, in contexts where AI technologies are already being used or where use is rapidly increasing, the better prepared we will be to address other contexts as well.
AI technologies hold tremendous potential. However, if the benefits are mainly instrumental, but the threats intrinsic, to what constitutes human flourishing, then we need to be constantly exercising discernment and prudential judgement. We need to frame our thinking on AI technologies around flourishing, and come back to these questions again and again, if we want to ensure a truly flourishing society.