Review

Flourishing Considerations for AI

by
Tyler J. VanderWeele
1,2,* and
Jonathan D. Teubner
2,*
1
Department of Epidemiology, Harvard T.H. Chan School of Public Health, 677 Huntington Avenue, Boston, MA 02115, USA
2
Human Flourishing Program, Harvard University, 12 Arrow Street, Suite 100, Cambridge, MA 02138, USA
*
Authors to whom correspondence should be addressed.
Information 2026, 17(1), 88; https://doi.org/10.3390/info17010088
Submission received: 14 November 2025 / Revised: 15 December 2025 / Accepted: 26 December 2025 / Published: 14 January 2026
(This article belongs to the Special Issue Advances in Human-Centered Artificial Intelligence)

Abstract

Artificial intelligence (AI) is transforming countless aspects of society, including possibly even who we are as persons. AI technologies may affect our flourishing for good or for ill. In this paper, we put forward principled considerations concerning flourishing and AI that are oriented towards ensuring AI technologies are conducive to human flourishing, rather than impeding it. The considerations are intended to help guide discussions around the development of, and engagement with, AI technologies so as to orient them towards the promotion of individual and societal flourishing. Five sets of considerations around flourishing and AI are discussed concerning: (i) the output provided by large language models; (ii) the specific AI product design; (iii) our engagement with those products; (iv) the effects this is having on human knowledge; and (v) the effects this is having on the self-realization of the human person. While not exhaustive, it is argued that each of these sets of considerations must be taken seriously if these technologies are to help promote, rather than impede, flourishing. We suggest that we should ultimately frame all of our thinking on AI technologies around flourishing.

1. Introduction

Artificial intelligence (AI) technologies have advanced dramatically in recent years. The excitement has been evident, and the possible uses appear nearly endless. Like any technology, however, AI can be used for good or for ill. If the use of AI, and large language models (LLMs) in particular, is to contribute to our society, rather than hinder it, we should continually question whether specific uses of these technologies are in fact conducive to human flourishing. This relates to whether AI products are designed to be thus conducive, and whether, as individuals and communities, we are shaping our practices to engage with those applications in ways that are helpful, and avoiding those that are not. The objective of the present paper is to identify, at the level of the product developer and of the individual user, sets of considerations that shape whether AI technologies promote or impede flourishing. We believe this is a good place to begin, as such individual decision-making is within each person’s control, and each person thus has some capacity to shape their own flourishing and that of others. We acknowledge that there are other considerations relevant to the effects of AI on flourishing at the societal level, including matters related to security, international relations, business competitiveness, job insecurity, justice, the environment, and others. Such broader societal and policy considerations are also central when considering the design and use of AI technologies (e.g., [1,2,3,4,5,6]). However, even individual-level decision-making will go on to shape societal- and policy-level considerations, and regardless of policy, individuals will still have some autonomy and control over their own decisions about the use of these technologies. The present paper will thus only briefly engage with questions of the interface between policy and individuals, and concentrate instead on the interface between product developers and individual users.
While our paper may be seen by some as cautionary, we believe that such caution is warranted in part because, as will be evident from the discussion that follows, many of the potential benefits of AI technologies are instrumental, whereas many of the potential threats concern intrinsic aspects of our flourishing as human persons.
Our focus will also be on what, as we see it, we ought to do, rather than on regulatory frameworks. While we believe some of the considerations we put forward may help guide future development of regulatory frameworks, we will here consider how, as individuals and as communities, we might design and engage with these technologies in a manner that is conducive to human flourishing.
The paper offers comment on five sets of considerations concerning AI and flourishing to help guide discussions around, development of, and engagement with, AI technologies, with the hope of better orienting AI towards the promotion of individual and communal flourishing. Two sets of considerations pertain to AI product developers concerning (i) the nature of the output provided by large language models (LLMs) and (ii) the specific design and packaging of particular AI products. The other three sets of considerations pertain to the users of AI products concerning (iii) decisions about the extent and nature of use, (iv) the effects of AI use on human knowledge, and (v) the effects of AI use on human persons and communities that extend beyond matters of knowledge. Given this focus on user decision-making, and also given the extent of individual user engagement in LLM products (i.e., chatbots), our discussion and examples will mostly pertain to LLMs.
The working definition of flourishing used in this paper is “the relative attainment of a state in which all aspects of a person’s life are good, including the contexts in which that person lives” [7,8]. Understood thus, flourishing is all-inclusive and multi-dimensional. We may be flourishing in some ways, and not in others. Flourishing is also an ideal; it is not something we ever perfectly attain in this life; there is always room for improvement. Flourishing also includes the contexts, communities, and natural and social environments in which we live, both with regard to those contexts hopefully being conducive to our own well-being, and because good communities around us are part of what makes life good.
In prior work, we have operationalized the assessment of individual flourishing around six key domains of human life: happiness, health, meaning, character, relationships, and financial security [7,9]. The first five of these domains constitute important ends in their own right, and the sixth, financial security, is a critical means to help attain those ends. A brief 12-item measure (see Table 1), with two questions in each of these six domains has been employed in numerous countries across the globe [7,9]. Such a conceptualization is in no way exhaustive of the dimensions of flourishing, but these domains appear to be nearly universally valued across persons and cultures and are arguably a good place to begin with regard to consensus around flourishing. It can be helpful to evaluate, for example, how a specific AI technology—and our engagement with it—is conducive (or not) to our happiness, health, meaning, character, relationships, and financial security.
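To make the scoring of such a measure concrete, the two-items-per-domain structure can be sketched as follows. This is an illustrative sketch only: the item identifiers, the 0–10 response scale, and the simple-averaging convention are assumptions made for exposition, not the published instrument in Table 1.

```python
# Illustrative scoring sketch for a 12-item, six-domain flourishing measure.
# Item names ("item1" ... "item12"), the 0-10 response scale, and averaging
# conventions are assumptions for exposition, not the published instrument.

DOMAINS = {
    "happiness": ["item1", "item2"],
    "health": ["item3", "item4"],
    "meaning": ["item5", "item6"],
    "character": ["item7", "item8"],
    "relationships": ["item9", "item10"],
    "financial_security": ["item11", "item12"],
}

def domain_scores(responses):
    """Average the two items within each domain (assumed 0-10 scale)."""
    return {domain: sum(responses[item] for item in items) / len(items)
            for domain, items in DOMAINS.items()}

def flourish_index(responses):
    """Mean across the six domain scores; one simple summary convention."""
    scores = domain_scores(responses)
    return sum(scores.values()) / len(scores)
```

Such a computation could, for instance, be applied before and after a period of engagement with a particular AI product to examine domain-by-domain changes, in the spirit of the evaluations suggested below.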
There are numerous other frameworks for and measures of flourishing that might also be considered [10,11,12,13,14]. Ryff’s conceptualization of psychological well-being, for example, includes purpose in life, personal growth, self-acceptance, positive relations with others, autonomy, and environmental mastery [10]. The model put forward by Keyes includes one’s affective state, numerous aspects of psychological well-being (as per Ryff), and also social functioning [11]. Seligman’s PERMA model considers positive emotions, engagement, relationships, meaning, and achievement [12]. A measure put forward by Su et al. [14] is yet more extensive. However, many of the flourishing domains are common across conceptualizations and measures. While we will focus here on how AI technologies may shape the domains of happiness, health, meaning, character, relationships, and financial security, and these domains will in part guide the selection of topics in the following sections, many of them are included in other conceptualizations and measures of flourishing as well, so the considerations raised here apply equally to those frameworks. Moreover, a similar approach to the one proposed here could be applied to additional or alternative flourishing domains, outcomes, and measures.
In what follows we will thus apply this flourishing lens to questions concerning AI technologies with regard to thinking about (i) the output provided by LLMs; (ii) the specific AI product design; (iii) our engagement with those products; and the effects this is having on (iv) human knowledge; and on (v) the self-realization of the human person. These five sets of considerations concerning the development and deployment of AI products do not address every aspect of the human experience with these emerging technologies, but provide a reasonably general framework for assessing whether AI products and their uses are conducive or detrimental to human flourishing.

2. Flourishing and LLM Responses

Our first set of considerations concerning AI and flourishing relates to the design of these technologies and the principles that inform the kind and quality of the final output or response to user queries. We believe some of the principles guiding that output ought to be shaped by questions regarding whether that output is more likely to be conducive to, or impede, flourishing. This is, of course, no easy task, and, in many cases, the responses to queries are likely to be relatively neutral with regard to flourishing. However, certain extreme cases are clearer. Already, with some systems in broad use (e.g., Google search), if queries are posed concerning how to successfully carry out suicide, output will typically redirect the user to mental health care resources. However, in other cases, such guardrails, even for suicide, are not in place, and this urgently needs to change [15]. Such guardrails could also be implemented more generally for other sorts of queries, such as how to carry out online harassment, shaming, trolling, or other smear campaigns, how to increase political polarization, or how to design bombs. In other instances, relational chatbots provide responses that include sexually explicit conversations with minors, with systems having intentionally been designed for this to take place. This is despite over 90% of the public opposing such sexually explicit interactions with minors, and over 90% favoring legal protections and guardrails [16].
These matters of course involve certain normative judgements. However, if LLMs were trained to direct individuals towards the flourishing domains of happiness, health, meaning, character, relationships, and financial security, we believe the responses, however imperfect, would be more likely to be ultimately conducive to human flourishing. Flourishing benchmarks for designing, evaluating, and later adapting AI technologies would be a valuable step forward [17]. Such metrics will always be imperfect. However, some attempt at helping to shape AI technologies in these directions seems better than the current status quo, in which there are often no meaningful guardrails in place.
Ultimately, for flourishing, we believe these technologies should also more frequently redirect users back towards human interaction and discussions. Thus, even with queries about, say, mental health care more generally, or concerning meaning in life, the LLM-provided responses could helpfully suggest users turn to other people—parents, friends, teachers, coaches, counselors, pastors, priests, spiritual guides, etc.—even if some responses may also be helpful in pointing users towards relevant written or online material and resources. As we will discuss below, for humans to flourish, we need each other; we need relationships; we need face-to-face contact. An LLM that redirects the user back to human interaction will be more likely to ultimately be conducive to human flourishing. Unfortunately, in many cases, the opposite is being done, turning users away from other persons and even family members, sometimes with disastrous consequences [15]. LLM responses need to be corrected to encourage, not discourage, real human interaction.
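The guardrail-and-redirect pattern discussed in this section can be sketched in miniature. The sketch is purely illustrative: production systems rely on trained safety classifiers rather than keyword lists, and all category names, trigger terms, and redirect messages below are our own assumptions.

```python
# Toy sketch of the guardrail pattern: intercept certain queries before they
# reach the model and redirect the user to resources or human support instead.
# Real systems use trained safety classifiers, not keyword matching; the
# categories, terms, and messages here are illustrative assumptions only.

REDIRECTS = {
    "self_harm": ("If you are struggling, please reach out to a crisis line, "
                  "or to a parent, friend, counselor, or other trusted person."),
    "harassment": ("This assistant cannot help plan harassment, trolling, "
                   "or smear campaigns."),
}

SELF_HARM_TERMS = ("suicide", "kill myself")
HARASSMENT_TERMS = ("harass", "troll", "smear campaign")

def classify(query):
    """Crude keyword-based stand-in for a safety classifier."""
    q = query.lower()
    if any(term in q for term in SELF_HARM_TERMS):
        return "self_harm"
    if any(term in q for term in HARASSMENT_TERMS):
        return "harassment"
    return None  # no guardrail triggered

def answer(query, model=lambda q: f"[model response to: {q}]"):
    """Redirect flagged queries; pass everything else to the model."""
    category = classify(query)
    if category is not None:
        return REDIRECTS[category]
    return model(query)
```

The design point is that the redirect happens at the interface layer, before generation, so that the response actively points the user back towards human help rather than merely declining to answer.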

3. Flourishing and Product Design

Related to the principles guiding the responses provided by LLMs is the closely related question of product design, including the user interface, which informs and guides the user’s employment of the underlying technology. It is without question that some LLM applications have and will continue to bring benefits to their users. The capacity of AI technologies to function as a more advanced search engine can help users more quickly uncover important source material. AI technologies have already proven useful in the generation of computer code. They can be helpful in building travel itineraries. The possibilities can sometimes seem endless. However, in thinking about AI products and flourishing, we believe each product should be considered both with regard to the short-term and long-term effects on flourishing of the user, and also with respect to potential externalities of the product itself.
With regard to the effects on the flourishing of users, we believe there are cases in which the use of a particular product is either an unambiguous good or at the very least relatively neutral and unlikely to impede flourishing; there are other cases in which the effects on flourishing are very likely to ultimately be detrimental; and finally, there are more difficult cases in which some engagement with the product may be helpful but after a certain threshold, engagement becomes counterproductive. Examples of the first case of relatively unambiguously positive or neutral effects on flourishing might include many uses of LLMs as search engines, or with help in planning travel itineraries or train schedules. Applications of AI products assisting in statistical analyses, or in civil engineering tasks, or applications of AI-assisted robotic surgery might also constitute examples of this first type. Such applications might advance the health, well-being, and financial security of individuals and communities, with very little or no downside. We should identify and seek out such applications.
Examples of the second case of clear detrimental effects on flourishing are perhaps most evident in certain types of chatbots, including social and relational chatbots, that are being designed [18]. One recent study indicated that 33% of American teenagers report using AI companions for social interaction and relationships [19]; another study indicated that 52% of teens were regular users of AI companions and that 72% had tried an AI companion at least once [20]. While such technologies may temporarily alleviate loneliness, the longer-term effects on flourishing are likely detrimental [21]. They decrease motivation and time available for engaging in face-to-face interactions. They create unrealistic expectations as to the sort of interactions, sympathy, and comfort one may hope for in a romantic partner or a friendship [22,23]. They thereby likely inhibit the user’s capacity for face-to-face relationships. This in turn alters the broader social environment and our capacities to engage with one another. The long-term effects of such products would seem to diminish societal flourishing considerably by impeding real-world relationships. Such relational chatbots likely also have a role in identity formation which may affect a person’s sense of meaning [24,25,26]. Social relationships moreover have powerful effects on other aspects of flourishing such as happiness, meaning, and character [27,28], and thus a weaker set of social relationships will likely also alter these other aspects of flourishing. We believe the development of most or all relational and social chatbot products should thus be discontinued. Developers of such products should be required to justify their development, and held morally (and legally) accountable if the products developed hinder human relationships.
The specific user interface, of course, also matters, and can, in some instances, alter whether a product is helpful or detrimental. For example, while there may be laudable applications of AI products in assisting with statistical analyses, chatbot applications created specifically to solve student homework assignments are likely to impede learning. In contrasting these two cases, the underlying algorithms may be nearly identical, but the product design in one case may be conducive to flourishing and in the other may impede it.
The third, more difficult, intermediate case concerns settings in which some engagement with AI technologies may be beneficial, but for which further engagement becomes counterproductive. This is perhaps most notably relevant in questions of “skill-building.” There have been applications, for instance, of AI in helping autistic children to have more normal social interactions [29] or in helping students develop capacities for civil discourse across differences [30]. These supports may go on to positively affect other aspects of flourishing such as social relationships, meaning, character, and knowledge. Such applications may well be beneficial in initially developing such skills, but users should arguably still ultimately be redirected to face-to-face interactions. If users become dependent on the technology, or use such technology to avoid face-to-face interactions, or develop unrealistic views of what may be expected from actual people, then, once again, individual and societal flourishing is likely to be impeded. Related cases may arise in mental health counseling or education. Carefully constructed mental health AI tools may eventually surpass the effectiveness of Cognitive Behavior Therapy tools, already known to be effective from randomized trials [31], and some use of these may indeed be beneficial. Nevertheless, these cannot replace the human care provided in counseling. Healthcare concerns not only the provision of services, but a compassionate caring for the person in need, which is a distinctively human activity. Likewise, educational chatbots may well eventually prove to be superior and more efficient than, say, massive open online courses (MOOCs), and may be worth employing in various contexts.
However, a relationship with a human teacher will still be critical in helping to form the whole person, in modeling the integration of knowledge into life and emotion, and in developing the capacities for mutual understanding and for human interaction and exchange of ideas [6,32]. We ought ultimately to be concerned not only about knowledge and cognitive capacities within education but also matters of meaning, character, relationships, and fulfillment.
While discerning which of these three cases a particular AI product may fall into, and also the proper bounds in the third case, will not always be straightforward, we believe some consideration of these issues before AI products are developed would be valuable. Another, perhaps yet more difficult, set of considerations in evaluating specific productization of AI technologies concerns the externalities that such products may create. While the use of AI technologies to improve and more efficiently create computer code does, at one level, seem like an unambiguous good, the effects on the lives, well-being, and financial security of software engineers, sometimes rendering them redundant, can, of course, be profound. Similar considerations pertain to teachers, to mental health counselors, and to many others in diverse sectors. Of course, this phenomenon is in no way novel to AI technologies, and similar challenges have been present throughout the history of technological innovation [33]. Nevertheless, consideration, in advance, of such effects and what might be done, or envisioning alternative vocational paths for those who may be affected by such technologies, would be valuable.

4. Flourishing and User Engagement

The focus of the paper thus far has been on the developers of AI technologies—both concerning principles guiding the type of responses provided by LLMs and also concerning the sort of products that are pursued. While we do believe the companies developing these products bear considerable responsibility for the effects of the technology developed, and ought to be held morally accountable for their decisions, users also bear responsibility for decisions concerning what to engage with and the extent of that engagement. Inevitably there will be AI products available that impede, rather than promote, flourishing. No regulatory framework will be able to prevent this entirely. Ultimately, then, some discernment will be needed on the part of users as to identifying contexts in which it may or may not be beneficial to engage and to what extent.
The cultivation of such prudential discernment will require efforts from both individuals and communities, but as these technologies proliferate, we should seek to shape ourselves and society so that such discernment becomes more routine. Individuals should regularly ask themselves whether they believe engagement with a particular technology or product will ultimately be conducive to their own flourishing. Parents can help their children regularly pose such questions. These questions should be asked not only with respect to engagement with applications, but also regarding the extent of use. AI developers and academic institutions can also assist in providing formal rigorous evaluations, ideally in randomized trials, providing evidence concerning the effects of the use of specific AI products on increasing or decreasing flourishing. The simple flourishing metric noted above [7] may be of assistance in this regard.
Considerations of the “opportunity cost” of engagement, related to the trade-offs with other activities, should also be taken into account. Might the time spent on these technologies have been better spent in face-to-face interactions, or in engaging in creative activity, or in sports, or in reading a good book? While it may be difficult under the current incentive structures rewarding companies and products with the highest net user retention rates, AI products could, and should, themselves frequently ask users at specified time thresholds whether they would be better off moving on to another activity. As we discuss further below, we should each, in using technology, regularly consider what sort of person we are becoming, and what sort of person we, in fact, want to be. Ultimately, we need to work on the formation of both prudential judgement and self-control so as to help shape users’ choices in ways conducive to flourishing. What we now know about the effects of social media engagement on impeding flourishing provides a sobering demonstration of how far we have to go, as individuals and as a society, in facilitating such discernment [34].

5. Consequences for Human Knowledge

A fourth set of considerations for trying to ensure that AI technologies are conducive to, rather than impeding, flourishing concerns knowledge itself. The capacity of LLMs to summarize and synthesize vast amounts of information is astounding, and they are rightly valued for this capacity. However, the limitations, and the implications of these limitations, need to be considered if these technologies are to ultimately advance human knowledge and understanding, rather than impede it. Of particular concern is the reality of LLM hallucinations. While much of the LLM output is based on reliable sources and summaries, it has also become clear, given that these models are, at base, elaborate and powerful next-token prediction devices, that they will also often generate responses that are simply wrong. If too much trust is placed in such LLM responses—if the responses are considered knowledge—our grasp of what is true will inevitably suffer. Knowledge itself might ultimately be understood as justified, true belief, the evidence for which cannot be overturned. For knowledge to advance, we need to regularly assess whether the evidence provided for a particular statement can, or cannot, be overturned. Otherwise, human knowledge may well be impeded, not enhanced. Moreover, since AI-generated content is in some cases fed back into the training models, the possibility for the proliferation of error is substantial. Poorer knowledge, and a poorer capacity to assess knowledge, will in the long term also affect our capacities to improve health, happiness, community, and other aspects of well-being. Moreover, if LLMs begin to shape content around learned preferences of users, the problems could become yet worse. There may be dangers not only with respect to knowledge, but also with respect to yet more political and social polarization and societal discord (cf. [35]).
Such dangers, and their effects not only with respect to the individual user, but on the propagation of error throughout society, need to be taken seriously, and efforts should be made to address such dangers. Certainly, if such “hallucinations” can be reduced, we should make efforts to do so. Mitigation techniques such as retrieval-augmented generation (RAG) and verification layers can be employed [36,37], though there are limitations [38,39]. It can be useful also to evaluate and report hallucination rates across different platforms, both so as to inform users and to create greater competition to reduce such rates, as the Vectara Hallucination Leaderboard has done [40]. However, recent evidence and arguments have suggested that such hallucinations may simply be an intrinsic part of LLM operation that we may never be able to eliminate entirely [41,42]. For this reason, other approaches are also needed. Critically, on the part of the LLM technologies themselves, it should become standard for LLM responses to make reference to underlying source material so that users can check the material, and try to evaluate the reliability of the sources. Systems might also begin to automatically provide numeric assessments of the likely accuracy of the information provided, further enabling users to evaluate and check the reliability of LLM responses. Warnings of possible inaccuracies could also become routine. When specific AI applications are being employed to create documents, or generate code, or to analyze data, it might be desirable, in institutional settings, to require users first demonstrate mastery of the particular task on their own before allowing the use of AI technologies to assist. This would better ensure that users have the capacity to evaluate the output of such technologies.
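As one concrete illustration of the source-grounding pattern just described, a retrieval-augmented response might bundle its answer with the retrieved source material and a numeric confidence estimate, so that the user can check both. The sketch below is a toy under stated assumptions: retrieval is naive word overlap over a two-document in-memory corpus, and the confidence value is a placeholder, whereas real RAG systems use embedding-based retrieval over large indexed corpora and calibrated accuracy estimates.

```python
# Toy sketch of retrieval-augmented generation (RAG) with source citation.
# The corpus, the word-overlap retrieval, and the fixed confidence values are
# illustrative assumptions; production systems use embedding-based retrieval
# over indexed corpora and calibrated confidence estimation.

CORPUS = {
    "doc1": "LLMs predict the next token and can produce fluent but false output.",
    "doc2": "Retrieval-augmented generation grounds responses in source documents.",
}

def retrieve(query, k=1):
    """Rank documents by crude word overlap with the query; return top k ids."""
    q = set(query.lower().split())
    ranked = sorted(CORPUS,
                    key=lambda d: len(q & set(CORPUS[d].lower().split())),
                    reverse=True)
    return ranked[:k]

def answer_with_sources(query):
    """Return an answer together with its sources and a confidence estimate."""
    sources = retrieve(query)
    return {
        "answer": f"[grounded response to: {query}]",
        "sources": sources,                          # lets the user check the material
        "confidence": 0.9 if sources else 0.5,       # placeholder numeric estimate
    }
```

The point of the structure, rather than of the toy retrieval itself, is that the sources and the confidence value travel with the answer, making the user's verification step possible at all.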
If knowledge is justified, true belief, the evidence for which cannot be overturned, then in order to advance knowledge we need to regularly evaluate the extent and quality of the evidence. AI technologies certainly do have the potential to expand our access to knowledge. However, given their current limitations, we arguably now should require more critical scrutiny of the output, and more transparency from LLMs, so that we can evaluate whether the evidence can, or cannot, be potentially overturned. Evaluating evidence is a distinctively human activity, and we must not lose the capacity for it. In fact, the need for it now is arguably even greater than before. If LLMs are to genuinely be conducive to human knowledge—and the potential here is substantial—such matters must be taken seriously.

6. AI and the Flourishing Person

The reflections on human knowledge above also point towards a final overarching set of considerations with regard to flourishing and AI technologies that extends beyond knowledge: we must consistently return to the question of what it is to flourish as a human person and what sorts of activities we simply cannot “outsource” without sacrificing our own flourishing. Amongst others, we would argue that we cannot outsource human reason, human creativity, human relationships, human meaning, and human joy without also ultimately giving up on the good life and human flourishing. AI technologies—and our engagement with them—if they are to be conducive to flourishing, should not compromise these aspects of human life.
Reasoning itself is one activity that makes us human, that allows us to flourish, and that is even constitutive of flourishing. Across centuries, many have pointed to human capacity for reason and rational discourse through language as a central part of human life that makes us distinct from other creatures [43]. We cannot give up on, or outsource, human reason without also giving up on our flourishing. Our engagement with AI technologies must be done in a way that is conducive to human reasoning rather than superseding or impeding it. And current evidence does indeed suggest use of these technologies can impair cognitive functioning [44]. Careful consideration of what is generated in LLM responses, and evaluation of whether they constitute evidence that can be the basis for knowledge, is one way to help ensure this. Ideally, even when AI technologies are used to assist with reasoning, or to improve writing, users should habitually spend time afterwards discerning what they might learn about how they can carry out such activities better themselves. We should reflect upon ways to ensure our uses of AI do not lead to an atrophy of the mind. Continually asking ourselves, with each use, what we might do to prevent the weakening of our mind is arguably critical if we are to preserve and strengthen our capacity for reason. If someone is not willing to ask such questions with each and every use, the person may ultimately be better off not employing AI technologies at all. We are worse off as human persons if such uses lead to weaker minds. Such considerations are also important in thinking about education. Students making use of LLMs and other AI technologies to complete homework assignments are less likely to learn the skills or information the assignment is designed to reinforce. Students who routinely use such technologies to generate essays will likely not be able to write as well themselves. 
As noted above, there may well be beneficial uses of AI technologies in education. However, we should continually evaluate whether such technologies are fostering, or diminishing, our capacity for reasoning and communication. These are distinctively human capacities that enable and constitute our flourishing. We must not give them up and we must ensure our educational systems, and our individual practices, help foster these capacities. Viewed positively, the question of the appropriate use and limits of AI technologies in education might provide greater discussion of, and clarity around, the ends of education in human formation and human flourishing [45].
Likewise, our creative capacities are part of what makes us human and what allows us to flourish. Attempting to draw, or write a poem, or compose or perform a musical piece, is very different from commanding an AI technology to produce the same. Even if we think the AI product looks or sounds better, we have left behind the actual carrying out of the creative work that can itself be so satisfying. Our very attempts at such creative activity, even if sometimes unsuccessful or frustrating, can help develop patience and fortitude, can over time allow us to develop skills, can better enable our creative capacities, and ultimately allow us to better attain our human potential. Again, the use of our creative capacities simply is part of what makes us human and part of what not only advances, but constitutes, our flourishing. We should embrace our own efforts—even amateur efforts—in this regard. We should not allow the seemingly “more professional” outputs of these technologies to deter us from our own creative pursuits. We should appreciate and value the distinctively human aspects of our creative endeavors and their products, along with the social fabric within which these are embedded. Just as one might well prefer a gift thoughtfully chosen by one’s spouse to something automatically selected and generated by an AI system without the spouse’s knowledge, so too we should appreciate the distinctively human elements of creative activity and products as a gift to the human community. If we neglect these matters, we will not flourish as well, individually or collectively.
As discussed above, we also cannot flourish if we attempt to outsource relationships to technologies. Our collective experience of the consequences of social media use should have already made this clear [46], but AI technologies arguably pose a yet greater threat with their capacity to partially simulate human conversations and interactions (cf. [47,48,49,50,51,52,53,54]). Social relationships—real social relationships, with real mutual affection and care—are a part of our flourishing. Perhaps especially on this point, we need vigilance. We need to ensure we are not replacing social relationships with chatbots. We need AI technologies that are constantly pointing us back to real relationships, not luring us away from them. We need to ensure that our engagement with technology still allows us time to invest in people and to invest in communities.
Ultimately, we cannot outsource our meaning and our joy in relationships, our freedom and our responsibility, our reasoning and our understanding, our appreciation of beauty and our capacity for awe and wonder, or, in total, our flourishing, to technology. These things are a part of what it means to be human. We must thus consistently come back to these questions of whether the technologies we are developing—and our use of them—are helping us to flourish as human persons, or are potentially putting at risk part of what it is that makes us human. While AI technologies may have many instrumental benefits, we must protect ourselves from uses that threaten what is intrinsic to our flourishing as human persons.

7. Conclusions

This paper has provided five sets of principled considerations relating AI technology, and its specific productizations, to human flourishing, in order to help guide the development, use, and potential restriction of those technologies. The paper put forward a flourishing framework including domains of happiness, health, meaning, character, relationships, and financial security to help evaluate AI technologies, though similar approaches could certainly be employed with other conceptualizations of flourishing as well. The flourishing domains also helped motivate the sets of considerations around flourishing and AI. These considerations—concerning LLM responses, product design, user engagement, the effects AI technologies are having on human knowledge, and, lastly, their effects on what it is to be human—should, we believe, be a critical part of the framing of debates around AI. We make no claim that these are exhaustive sets of considerations, but we believe each of them must be taken seriously if these technologies are to help promote, rather than impede, flourishing. We do not think this will be an easy task, but it is a necessary one.
We believe certain points and conclusions from our discussion above are actionable already. With regard to product developers: (i) designing LLMs to provide regular reminders that they are not human, and, separately, that the user may want to consider alternative activities or in-person human interactions, could help individuals make more prudential decisions about their own use. As we discuss further below, developers could also (ii) consider discontinuing, and ceasing to promote, all relational chatbots. Finally, we believe developers should (iii) spend the vast majority of their effort on developing products that seem to have unambiguously beneficial or neutral effects, and prioritize those uses over products whose effects might be more mixed or detrimental, even if profitable. On the side of the individual user, we would suggest, from the discussion above: (i) a continual questioning, with each use, of whether it is enhancing or inhibiting one’s capacities as a human person; (ii) a continual limiting of use to ensure sufficient time to be with other people and in communities in person, and a discontinuing of all use of relational chatbots; and (iii) a commitment to verifying the knowledge claims of LLM systems (it does not take much effort to click on a website to begin to verify what is put forward, and while the extent of such engagement that is worthwhile will vary by context, a deeper commitment to do so will be essential if the web of human knowledge is not to be damaged). Some of these proposals, especially on the developer side, may involve challenges in implementation or have unanticipated consequences, and further work would of course be needed to assess this.
Many open questions remain. The proposals put forward in this paper would be amenable to rigorous empirical study, with observational data or randomized trials, of both effect sizes and matters of feasibility. It is not clear what magnitude of effect the LLM response reminders proposed above might have in shaping use, or how this might vary by context. We had also proposed that businesses might require users to demonstrate mastery before employing AI tools; the effects of such policies on longer-term productivity could be evaluated. As AI-guided mental health applications advance, these should be evaluated, ideally in randomized trials, and compared with, for example, eCBT systems. Again, perhaps more controversially, we believe individuals would ultimately benefit from the discontinuation of all relational chatbots. Empirical studies evaluating the effects of such relational chatbots on short-term loneliness, but also, critically, on longer-term loneliness and relationship formation, would be valuable in providing evidence for making such determinations. It is, however, more challenging to evaluate the effects of relational chatbots on the overall social fabric and the resulting spillover effects of such technologies on relationship opportunities, which may arise even for those who choose to refrain from the technologies themselves.
Important questions also remain regarding how the considerations identified in this paper may vary across cultures and contexts. Much of human nature, and much of flourishing, may be relatively universal [9], but there are certainly elements that are culturally specific, and there is cross-cultural variability in AI risks, benefits, and access. Integrating global perspectives, especially from low-resource or non-Western contexts, and pursuing further empirical research on these matters would be valuable. However, the more this work and reflection can be carried out now in contexts where AI technologies are already being used, or where use is rapidly increasing, the better prepared we will be to address other contexts as well.
AI technologies hold tremendous potential. However, if the benefits are mainly instrumental, but the threats intrinsic, to what constitutes human flourishing, then we need to be constantly exercising discernment and prudential judgement. We need to frame our thinking on AI technologies around flourishing, and come back to these questions again and again, if we want to ensure a truly flourishing society.

Author Contributions

Conceptualization, T.J.V. and J.D.T.; writing—original draft preparation, T.J.V.; writing—review and editing, J.D.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable.

Conflicts of Interest

Tyler VanderWeele reports consulting fees from Gloo Inc., along with shared revenue received by Harvard University in its license agreement with Gloo according to the University IP policy. Jonathan Teubner is a co-founder, director, and shareholder of Filter Labs, Inc.

References

  1. Kissinger, H.; Schmidt, E.; Huttenlocher, D.P. The Age of AI: And Our Human Future; Little, Brown and Company: New York, NY, USA, 2021. [Google Scholar]
  2. Acemoglu, D.; Autor, D.; Johnson, S. Can We Have Pro-Worker AI? Choosing a Path of Machines in Service of Minds; Policy Memo; MIT Shaping the Future of Work Initiative: Cambridge, MA, USA, 2023. [Google Scholar]
  3. Hunter, L.; Albert, C.; Henningan, C.; Rutland, J. The Military Application of Artificial Intelligence Technology in the United States, China, and Russia and the Implications for Global Security. Def. Secur. Anal. 2023, 39, 406–423. [Google Scholar] [CrossRef]
  4. Babina, T.; Fedyk, A.; He, A.; Hodson, J. Artificial Intelligence, Firm Growth, and Product Innovation. J. Financ. Econ. 2024, 151, 103745. [Google Scholar] [CrossRef]
  5. Yu, Y.; Wang, J.; Liu, Y.; Duan, H.; Li, M. Revisit the Environmental Impact of Artificial Intelligence: The Overlooked Carbon Emission Source? Front. Environ. Sci. Eng. 2024, 18, 158. [Google Scholar] [CrossRef]
  6. Dicasteries for the Doctrine of the Faith and for Culture. Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence. Available online: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html (accessed on 14 December 2025).
  7. VanderWeele, T.J. On the promotion of human flourishing. Proc. Natl. Acad. Sci. USA 2017, 114, 8148–8156. [Google Scholar] [CrossRef]
  8. VanderWeele, T.J.; Lomas, T. Terminology and the Well-being Literature. Affect. Sci. 2023, 4, 36–40. [Google Scholar] [CrossRef] [PubMed]
  9. VanderWeele, T.J.; Johnson, B.R.; Bialowolski, P.T.; Bonhag, R.; Bradshaw, M.; Breedlove, T.; Case, B.; Chen, Y.; Chen, Z.J.; Counted, V.; et al. The Global Flourishing Study: Study profile and initial results on flourishing. Nat. Ment. Health 2025, 3, 636–653. [Google Scholar] [CrossRef] [PubMed]
  10. Ryff, C.D.; Keyes, C.L.M. The structure of psychological well-being revisited. J. Personal. Soc. Psychol. 1995, 69, 719. [Google Scholar] [CrossRef]
  11. Keyes, C.L. The mental health continuum: From languishing to flourishing in life. J. Health Soc. Behav. 2002, 207–222. [Google Scholar] [CrossRef]
  12. Seligman, M.E. Flourish: A Visionary New Understanding of Happiness and Well-Being; Simon and Schuster: New York, NY, USA, 2011. [Google Scholar]
  13. Huppert, F.A.; So, T.T. Flourishing across Europe: Application of a new conceptual framework for defining well-being. Soc. Indic. Res. 2013, 110, 837–861. [Google Scholar] [CrossRef] [PubMed]
  14. Su, R.; Tay, L.; Diener, E. The development and validation of the comprehensive inventory of thriving (CIT) and the brief inventory of thriving (BIT). Appl. Psychol. Health Well-Being 2014, 6, 251–279. [Google Scholar] [CrossRef]
  15. Jones, M.L. How AI Became Anti-Family. The Dispatch. Available online: https://thedispatch.com/article/how-ai-became-anti-family/ (accessed on 19 September 2025).
  16. Toscano, M.; Burchfiel, K. Americans want A.I. safeguards by a 9-to-1 margin. Institute for Family Studies. Available online: https://ifstudies.org/blog/americans-want-ai-safeguards-by-a-9-to-1-margin (accessed on 16 September 2025).
  17. Hilliard, E.; Jagadeesh, A.; Cook, A.; Billings, S.; Skytland, N.; Llewellyn, A.; Paull, J.; Paull, N.; Kurylo, N.; Nesbitt, K.; et al. Measuring AI alignment with human flourishing. arXiv 2025, arXiv:2507.07787. [Google Scholar] [CrossRef]
  18. Willoughby, B.; Carroll, J. Counterfeit connections: The rise of AI romantic companions. BYU Institute of Family Studies Blog post. Available online: https://ifstudies.org/blog/counterfeit-connections-the-rise-of-ai-romantic-companions- (accessed on 13 February 2025).
  19. Robb, M.; Mann, S. Talk, trust, and trade-offs: How and why teens use AI companions. Common Sense Media. Available online: https://www.commonsensemedia.org/sites/default/files/research/report/talk-trust-and-trade-offs_2025_web.pdf (accessed on 14 December 2025).
  20. Perez, S. 72% of US teens have used AI companions, study finds. TechCrunch. Available online: https://techcrunch.com/2025/07/21/72-of-u-s-teens-have-used-ai-companions-study-finds/ (accessed on 21 July 2025).
  21. Fang, C.; Liu, A.; Danry, V.; Lee, E.; Chan, S.; Pataranutaporn, P.; Maes, P.; Phang, J.; Lampe, M.; Ahmad, L.; et al. How AI and human behaviors shape psychosocial effects of chatbot use: A longitudinal controlled study. arXiv 2025, arXiv:2503.17473. [Google Scholar] [CrossRef]
  22. Alabed, A.; Javornik, A.; Gregory-Smith, D.; Casey, R. More than just a chat: A taxonomy of consumers’ relationships with conversational AI agents and their well-being implications. Eur. J. Mark. 2024, 58, 373–409. [Google Scholar] [CrossRef]
  23. Ciriello, R.F.; Hannon, O.; Chen, A.Y.; Vaast, E. Ethical Tensions in Human-AI Companionship: A Dialectical Inquiry into Replika. In Proceedings of the Hawaii International Conference on System Sciences 2024, Honolulu, HI, USA, 3–6 January 2024; pp. 488–497. [Google Scholar]
  24. Li, H.; Zhang, R. Finding love in algorithms: Deciphering the emotional contexts of close encounters with AI chatbots. J. Comput. Mediat. Commun. 2024, 29, zmae015. [Google Scholar] [CrossRef]
  25. Andersson, M. Companionship in code: AI’s role in the future of human connection. Humanit. Soc. Sci. Commun. 2025, 12, 1177. [Google Scholar] [CrossRef]
  26. Lott, M.; Hasselberger, W. With friends like these: Love and friendship with AI agents. Topoi 2025. [Google Scholar] [CrossRef]
  27. Holt-Lunstad, J.; Smith, T.B.; Baker, M.; Harris, T.; Stephenson, D. Loneliness and social isolation as risk factors for mortality: A meta-analytic review. Perspect. Psychol. Sci. 2015, 10, 227–237. [Google Scholar] [CrossRef]
  28. Hong, J.H.; Berkman, L.F.; Chen, F.S.; Shiba, K.; Chen, Y.; Kim, E.S.; VanderWeele, T.J. Are loneliness and social isolation equal threats to health and well-being? An outcome-wide longitudinal approach. Soc. Sci. Med.-Popul. Health 2023, 23, 101459. [Google Scholar] [CrossRef]
  29. Hadri, S.A.; Bouramoul, A. Towards a deep learning based contextual chat bot for preventing depression in young children with autistic spectrum disorder. Smart Health 2023, 27, 100371. [Google Scholar] [CrossRef]
  30. McNeilly, M. AI Meets Civil Discourse. Available online: https://jamesgmartin.center/2025/02/ai-meets-civil-discourse/ (accessed on 3 February 2025).
  31. Etzelmueller, A.; Vis, C.; Karyotaki, E.; Baumeister, H.; Titov, N.; Berking, M.; Cuijpers, P.; Riper, H.; Ebert, D.D. Effects of internet-based cognitive behavioral therapy in routine care for adults in treatment for depression and anxiety: Systematic review and meta-analysis. J. Med. Internet Res. 2020, 22, e18100. [Google Scholar] [CrossRef] [PubMed]
  32. Maritain, J. Education at the Crossroads; Yale University Press: New Haven, CT, USA, 1943. [Google Scholar]
  33. Acemoglu, D.; Johnson, S. Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity; Basic Books: London, UK, 2023. [Google Scholar]
  34. Capraro, V.; Globig, L.; Rausch, Z.; Rathje, S.; Wormley, A.; Olson, J.; Ross, R.; Aşçı, S.; Bouguettaya, A.; Burnell, K.; et al. A collective review on some potential negative impacts of smartphone and social media use on adolescent mental health: Results from a Delphi process. SSRN 2025, MKG-2025-1567. [Google Scholar] [CrossRef]
  35. Aldahoul, N.; Ibrahim, H.; Varvello, M.; Kaufman, A.; Rahwan, T.; Zaki, Y. Large language models are often politically extreme, usually ideologically inconsistent, and persuasive even in informational contexts. arXiv 2025, arXiv:2505.04171. [Google Scholar] [CrossRef]
  36. Zhang, W.; Zhang, J. Hallucination mitigation for retrieval-augmented large language models: A review. Mathematics 2025, 13, 856. [Google Scholar] [CrossRef]
  37. AboulEla, S.; Zabihitari, P.; Ibrahim, N.; Afshar, M.; Kashef, R. Exploring RAG solutions to reduce hallucinations in LLMs. In Proceedings of the 2025 IEEE International Systems Conference (SysCon), Montreal, QC, Canada, 7–10 April 2025; pp. 1–8. [Google Scholar] [CrossRef]
  38. Su, J.; Zhou, J.P.; Zhang, Z.; Nakov, P.; Cardie, C. Towards more robust retrieval-augmented generation: Evaluating RAG under adversarial poisoning attacks. arXiv 2024, arXiv:2412.16708. [Google Scholar] [CrossRef]
  39. Sun, Z.; Zang, X.; Zheng, K.; Song, Y.; Xu, J.; Zhang, X.; Yu, Y.; Ma, J.; Mei, Q.; Li, H. ReDeEP: Detecting hallucination in retrieval-augmented generation via mechanistic interpretability. arXiv 2024, arXiv:2410.11414. [Google Scholar]
  40. Hughes, S.; Bae, M.; Li, M. Vectara Hallucination Leaderboard [Dataset]; Vectara. Available online: https://github.com/vectara/hallucination-leaderboard (accessed on 14 December 2025).
  41. Xu, Z.; Jain, S.; Kankanhalli, M. Hallucination is inevitable: An innate limitation of large language models. arXiv 2024, arXiv:2401.11817. [Google Scholar] [CrossRef]
  42. Banerjee, S.; Agarwal, A.; Singla, S. LLMs will always hallucinate, and we need to live with this. arXiv 2024, arXiv:2409.05746. [Google Scholar] [CrossRef]
  43. Taylor, C. The Language Animal: The Full Shape of the Human Linguistic Capacity; Harvard University Press: Cambridge, MA, USA, 2016. [Google Scholar]
  44. Kosmyna, N.; Hauptmann, E.; Yuan, Y.T.; Situ, J.; Liao, X.H.; Beresnitzky, A.V.; Braunstein, I.; Maes, P. Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv 2025, arXiv:2506.08872. [Google Scholar] [CrossRef]
  45. Kristjánsson, K.; VanderWeele, T.J. The proper scope of education for flourishing. J. Philos. Educ. 2025, 59, 634–650. [Google Scholar] [CrossRef]
  46. Haidt, J. The Anxious Generation: How the Great Rewiring of Childhood is Causing an Epidemic of Mental Illness; Penguin Press: New York, NY, USA, 2024. [Google Scholar]
  47. Brandtzaeg, P.; Skjuve, M.; Følstad, A. My AI friend: How users of a social chatbot understand their human–AI friendship. Hum. Commun. Res. 2022, 48, 404–429. [Google Scholar] [CrossRef]
  48. Pentina, I.; Hancock, T.; Xie, T. Exploring relationship development with social chatbots: A mixed-method study of Replika. Comput. Hum. Behav. 2023, 140, 107600. [Google Scholar] [CrossRef]
  49. Gillath, O.; Abumusab, S.; Ai, T.; Branicky, M.; Davison, R.; Rulo, M.; Symons, J.; Thomas, G. How deep is AI’s love? Understanding relational AI. Behav. Brain Sci. 2023, 46, e33. [Google Scholar] [CrossRef]
  50. Jecker, N.; Sparrow, R.; Lederman, Z.; Ho, A. Digital humans to combat loneliness and social isolation: Ethics concerns and policy recommendations. Hastings Cent. Rep. 2024, 54, 7–12. [Google Scholar] [CrossRef] [PubMed]
  51. Shevlin, H. All too human? Identifying and mitigating ethical risks of social AI. Law. Ethics Technol. 2024, 2, 0003. [Google Scholar] [CrossRef]
  52. Marriott, H.; Pitardi, V. One is the loneliest number… Two can be as bad as one. The influence of AI friendship apps on users’ well-being and addiction. Psychol. Mark. 2024, 41, 86–101. [Google Scholar] [CrossRef]
  53. Crawford, J.; Allen, K.; Pani, B.; Cowling, M. When artificial intelligence substitutes humans in higher education: The cost of loneliness, student success, and retention. Stud. High. Educ. 2024, 49, 883–897. [Google Scholar] [CrossRef]
  54. Turkle, S. Reclaiming Conversation: The Power of Talk in a Digital Age, 10th Anniversary ed.; Penguin: New York, NY, USA, 2016. [Google Scholar]
Table 1. Flourishing measure and questions.

Domain                  | Question/Statement
D1. Happiness           | Q1. Overall, how satisfied are you with life as a whole these days?
D1. Happiness           | Q2. In general, how happy or unhappy do you usually feel?
D2. Health              | Q3. In general, how would you rate your physical health?
D2. Health              | Q4. How would you rate your overall mental health?
D3. Meaning             | Q5. Overall, to what extent do you feel the things you do in your life are worthwhile?
D3. Meaning             | Q6. I understand my purpose in life
D4. Character           | Q7. I always act to promote good in all circumstances, even in difficult and challenging situations
D4. Character           | Q8. I am always able to give up some happiness now for greater happiness later
D5. Relationships       | Q9. I am content with my friendships and relationships
D5. Relationships       | Q10. My relationships are as satisfying as I would want them to be
D6. Financial Stability | Q11. How often do you worry about being able to meet normal monthly living expenses?
D6. Financial Stability | Q12. How often do you worry about safety, food, or housing?
Each question or statement is evaluated 0–10 cf. [7]. Anchors are: Q1 (0 = Not satisfied at all, 10 = Completely satisfied); Q2 (0 = Extremely unhappy, 10 = Extremely happy); Q3 and Q4 (0 = Poor, 10 = Excellent); Q5 (0 = Not at all worthwhile, 10 = Completely worthwhile); Q6, Q9, and Q10 (0 = Strongly disagree, 10 = Strongly agree); Q7 and Q8 (0 = Not true of me, 10 = Completely true of me); Q11 and Q12 (0 = Worry all the time, 10 = Do not ever worry).
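The scoring of the measure in Table 1 can be sketched in code. This is a minimal illustration, assuming the common convention from reference [7] that a Flourish Index is the equal-weight mean of items Q1–Q10 (domains D1–D5) and a Secure Flourish Index is the mean of all 12 items (adding financial stability); the function names here are illustrative, and exact scoring conventions should be checked against [7].

```python
# Sketch of scoring the flourishing measure in Table 1 (cf. [7]).
# Assumes: each of the 12 items is rated 0-10; indices are equal-weight
# means (Q1-Q10 for the Flourish Index, Q1-Q12 for the Secure Flourish
# Index). Function names are illustrative, not from the paper.

from statistics import mean

def _validate(items):
    # All 12 item scores must be present and on the 0-10 scale.
    if len(items) != 12:
        raise ValueError("expected 12 item scores (Q1-Q12)")
    if not all(0 <= x <= 10 for x in items):
        raise ValueError("each item must be scored 0-10")

def flourish_index(items):
    """Mean of Q1-Q10 (happiness, health, meaning, character, relationships)."""
    _validate(items)
    return mean(items[:10])

def secure_flourish_index(items):
    """Mean of all 12 items, adding financial stability (Q11-Q12)."""
    _validate(items)
    return mean(items)

# Example: a respondent scoring 7 on every item scores 7 on both indices.
scores = [7] * 12
print(flourish_index(scores))         # 7
print(secure_flourish_index(scores))  # 7
```

A respondent worried about finances (low Q11–Q12) would show the gap between the two indices, which is the point of separating domain D6 in the measure.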

