Enactivism, Health, AI, and Non-Neurotypical Individuals: Toward Contextualized, Personalized, and Ethically Grounded Interventions
Abstract
1. Introduction: The Rise of AI in Health and Mental Well-Being
1.1. Background: AI’s Ascendancy and Constraints in Healthcare
1.2. Defining “Non-Neurotypical”
1.3. Enactivism as a Potentially Transformative Lens
2. AI Uses in Non-Neurotypical Healthcare
2.1. Overview of Existing Applications
- Socially Assistive Robots: Platforms such as NAO or Pepper have been tested in assisting autistic users with social cue recognition, emotion expression, and routine-building [15,16]. These robotic systems can engage users in structured tasks or play-based interactions, helping them practice social routines in a predictable, less anxiety-inducing environment.
- Virtual Reality (VR) Therapy: Immersive environments allow users to practice public speaking, job interviews, or everyday social interactions, adjusting complexity in real time [17,18]. For example, VR scenarios could simulate an office environment for individuals with autism to rehearse the dynamics of job interviews in a controlled setting [19].
- Language Processing Apps: AI-based natural language tools help users interpret figurative speech, expand vocabulary, or practice conversation scripts, providing nuanced feedback [20]. These apps often include text-to-speech features, predictive suggestions, or word-substitution capabilities tailored to user profiles.
- Emotion Recognition Systems: Machine learning algorithms can detect affective states via facial expressions or voice intonations, potentially training individuals to interpret social cues [21]. A cautionary note here is that some scholars (e.g., [22]) question whether facial expressions truly map onto universal emotional states, highlighting the need to consider users’ cultural and individual contexts.
- Personalized Treatment Planning: Data mining and analytics can reveal patterns in symptom emergence or therapy efficacy, allowing healthcare professionals to tailor interventions [23]. Detailed real-time data—such as user schedules, stress levels, or social interactions—can be used to build custom care pathways.
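To make the last point concrete, the sketch below aggregates logged session data into context-level patterns of the kind that could inform a custom care pathway. It is a minimal illustration, not any cited system: the record fields (context, stress_level, goal_met) are hypothetical placeholders.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class SessionRecord:
    """One logged therapy session; all field names are illustrative assumptions."""
    context: str         # e.g., "VR job interview", "robot-led turn-taking game"
    stress_level: float  # 0.0 (calm) to 1.0 (overloaded), from self-report or sensors
    goal_met: bool       # whether the agreed session goal was reached

def summarize_by_context(records: list[SessionRecord]) -> dict[str, dict[str, float]]:
    """Aggregate stress and success per context, as a crude input to care-pathway tailoring."""
    grouped: dict[str, list[SessionRecord]] = defaultdict(list)
    for record in records:
        grouped[record.context].append(record)
    return {
        context: {
            "mean_stress": sum(r.stress_level for r in rs) / len(rs),
            "success_rate": sum(r.goal_met for r in rs) / len(rs),
        }
        for context, rs in grouped.items()
    }
```

On an enactive reading, such summaries would inform, not replace, the joint judgment of user, caregivers, and clinicians.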
2.2. Opportunities and Ethical Pitfalls
3. Enactivism: Core Principles and Compatibility with AI
3.1. Revisiting the Enactive Paradigm
- (1) Embodied Cognition: Cognitive processes are tightly interwoven with bodily actions and sensorimotor capacities [26].
- (2) Context-Sensitivity: Action and perception cannot be fully separated from the environment in which they unfold [27].
- (3) Participatory Sense-Making: Social understanding and interaction are co-constructed, emphasizing reciprocal adjustments and shared meaning [28].

This perspective has gained traction in fields such as social cognition, philosophy of mind, and even psychiatry [29,30]. For AI design, an enactive approach warns against purely data-driven solutions that ignore bodily signals, cultural context, and real-time interpersonal feedback.
3.2. Interaction Theory vs. Predictive Processing Debates
3.3. Neurodiversity Perspectives
4. Enactivism’s Contribution to AI Design for Non-Neurotypical Users
4.1. Embodied and Context-Sensitive Interventions
- Robotic Tutors: Socially assistive robots can detect subtle nonverbal cues—such as restlessness or shifts in vocal pitch—and adjust session pacing or content in real time [16]. This synergy with a user’s sensorimotor engagement avoids imposing a single, uniform script.
- VR Therapy: Virtual environments might monitor physiological signals (heart rate, galvanic skin response) to gauge overload or anxiety, automatically scaling down complexity or prompting breaks [17].
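As a rough illustration of such embodied feedback, the sketch below scales a scenario’s complexity from two physiological signals. The thresholds and signal names are assumptions made for exposition; a real system would calibrate them against each user’s personal baseline rather than fixed constants.

```python
from dataclasses import dataclass

@dataclass
class PhysiologicalSample:
    """A single sensor reading; units and fields are illustrative."""
    heart_rate_bpm: float
    gsr_microsiemens: float  # galvanic skin response (skin conductance)

def adjust_complexity(level: int, sample: PhysiologicalSample,
                      hr_ceiling: float = 100.0,
                      gsr_ceiling: float = 8.0) -> tuple[int, bool]:
    """Return (new_complexity_level, suggest_break).

    The placeholder ceilings stand in for per-user calibrated baselines.
    """
    overloaded = (sample.heart_rate_bpm > hr_ceiling
                  or sample.gsr_microsiemens > gsr_ceiling)
    if overloaded:
        # Scale the scenario down and offer a pause, leaving the choice to the user.
        return max(0, level - 1), True
    # When signals stay within comfortable bounds, raise the challenge gently.
    return min(10, level + 1), False
```

Keeping the break as an offer rather than an automatic interruption preserves the user’s agency, in line with Section 4.3.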
4.2. Social Interdependence and Relationality
- Shared Dashboards: A parent and child jointly track daily emotional patterns or therapy progress, collaboratively adjusting strategies.
- Group VR Scenarios: Multi-user immersive environments let participants practice cooperative tasks together, mirroring real-life demands rather than isolating the user with a non-human agent.
4.3. Agency and Autonomy Enhancement
- Offering adjustable difficulty levels or “comfort zones”, letting the user indicate readiness for more challenging tasks.
- Allowing user-led modifications, e.g., toggling certain stimuli on/off, selecting communication modes (visual aids, typed text, spoken dialogue).
- Prompting the user for reflections on how they experienced a particular scenario, respecting the role of first-person insights.
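A minimal sketch of such user-led configuration follows; the field names and modes are hypothetical, meant only to show how control can sit on the user’s side rather than the system’s.

```python
from dataclasses import dataclass, field

@dataclass
class UserPreferences:
    """User-owned settings; all fields are illustrative placeholders."""
    difficulty: int = 1                  # the user's self-declared "comfort zone"
    communication_mode: str = "visual"   # "visual" | "typed" | "spoken"
    muted_stimuli: set[str] = field(default_factory=set)

    def request_harder_task(self) -> None:
        """Raise difficulty only when the user signals readiness."""
        self.difficulty += 1

    def toggle_stimulus(self, name: str) -> None:
        """Let the user switch a stimulus (e.g., background audio) on or off."""
        if name in self.muted_stimuli:
            self.muted_stimuli.remove(name)
        else:
            self.muted_stimuli.add(name)
```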
4.4. Addressing Cultural and Environmental Contexts
- Localize VR scenarios to reflect a user’s everyday tasks and cultural norms.
- Integrate user-chosen imagery, language, or role-players (e.g., siblings, mentors) into digital interactions.
- Adapt session times to real schedules and personal circadian rhythms.
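One way to carry such contextual fit into software is a localization profile like the hypothetical sketch below, where every field is an assumption standing in for locally gathered information about the user’s environment and routines.

```python
from dataclasses import dataclass, field
from datetime import time

@dataclass
class ScenarioLocalization:
    """Illustrative container for fitting a VR scenario to a user's context."""
    language: str = "en"
    setting: str = "generic office"  # replaced with the user's everyday environment
    familiar_roles: list[str] = field(default_factory=list)   # e.g., ["sibling", "mentor"]
    preferred_windows: list[tuple[time, time]] = field(default_factory=list)

    def fits_schedule(self, proposed: time) -> bool:
        """Check a proposed session time against user-preferred windows (circadian fit)."""
        return any(start <= proposed <= end for start, end in self.preferred_windows)
```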
4.5. Participatory Sense-Making Tools
- Interactive Goal-Setting: At therapy onset, the system can prompt users and caregivers to define shared goals or daily obstacles.
- Collaborative Reflection: After each session, the user or support network can annotate results—e.g., “User felt anxious due to bright lighting in VR scene”—which shapes the subsequent session.
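A toy sketch of this feedback loop follows; the tagging scheme and the single adjustment rule are invented for illustration, not a proposal for how annotations should be interpreted clinically.

```python
from dataclasses import dataclass

@dataclass
class Reflection:
    """A shared annotation attached to a completed session."""
    author: str      # "user", "parent", "therapist"
    note: str        # e.g., "User felt anxious due to bright lighting in VR scene"
    tags: list[str]  # e.g., ["lighting", "anxiety"]

def plan_next_session(settings: dict, reflections: list[Reflection]) -> dict:
    """Fold the support network's annotations into the next session's settings."""
    updated = dict(settings)
    if any("lighting" in r.tags for r in reflections):
        # Example rule: dim the scene if anyone flagged lighting as a stressor.
        updated["scene_brightness"] = min(updated.get("scene_brightness", 0.8), 0.5)
    return updated
```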
5. Ethical and Practical Implications
5.1. Algorithmic Bias and Data Fairness
5.2. Privacy and Informed Consent
5.3. Autonomy vs. Automation
5.4. Human Contact, Empathy, and Social Withdrawal
6. Toward a More Holistic (Inter-)Personalized Approach
6.1. Expanding “Personalized” to “Inter-Personalized”
6.2. Addressing the “Double Empathy” Challenge
7. Counterarguments and Refinements
7.1. “Too Broad or Philosophical?”
7.2. Correcting Overly Sweeping Ethical Claims
7.3. Integrating Predictive Processing Without Negating Embodiment
7.4. Neurobiological Considerations
8. Guidelines for Enactive AI Interventions
- (1) User-Centered Co-Design
  - ○ Involve non-neurotypical users, caregivers, and clinicians in iterative design, from conceptualization to deployment.
  - ○ Host “think-aloud” sessions to capture first-person experiences and identify points of friction or confusion.
- (2) Embodied Feedback and Adaptation
  - ○ Monitor real-time sensor data to gauge stress or overload, automatically adjusting the complexity of tasks in VR or robot-based interventions.
  - ○ Provide clear pause/break options whenever the user signals discomfort or disinterest, preserving autonomy.
- (3) Relational Integration
  - ○ Encourage collaborative reflection among users, family members, and therapists.
  - ○ Create multi-participant VR or robot-assisted scenarios that simulate the real social contexts of the individual (e.g., classroom, office, or family gathering).
- (4) Cultural and Environmental Fit
  - ○ Adapt language, cultural cues, and scenario content to the user’s local environment.
  - ○ Avoid universal “best practices” that ignore unique socioeconomic or familial constraints.
- (5) Bias Auditing and Ethical Oversight
  - ○ Regularly evaluate algorithms for systematic misclassification or disproportionate errors affecting specific subgroups (e.g., women and ethnic minorities); a minimal audit sketch follows this list.
  - ○ Maintain rigorous data-privacy safeguards and transparent consent protocols.
- (6) Outcome Measures Beyond Symptom Reduction
  - ○ Include metrics for user satisfaction, sense of agency, and social connections, reflecting an enactivist emphasis on relational flourishing.
  - ○ Incorporate first-person narratives and feedback loops into standard quantitative measures.
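To illustrate guideline (5), the sketch below computes per-subgroup misclassification rates from parallel sequences of predictions, ground-truth labels, and subgroup identifiers; large gaps between subgroups would flag the model for closer review. It is a minimal first check, not a full fairness analysis.

```python
from collections import defaultdict
from typing import Hashable, Sequence

def audit_subgroup_error_rates(predictions: Sequence[Hashable],
                               labels: Sequence[Hashable],
                               groups: Sequence[str]) -> dict[str, float]:
    """Misclassification rate per subgroup; inputs are parallel sequences."""
    errors: dict[str, list[bool]] = defaultdict(list)
    for pred, truth, group in zip(predictions, labels, groups):
        errors[group].append(pred != truth)
    return {group: sum(errs) / len(errs) for group, errs in errors.items()}

# e.g., audit_subgroup_error_rates(["a", "b"], ["a", "a"], ["f", "m"])
# -> {"f": 0.0, "m": 1.0}
```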
9. Future Directions and Research
9.1. Empirical Validation and Longitudinal Impact
9.2. Ambient Smart Environments
9.3. Bridging Predictive Mind and Enactive Mind
9.4. Cross-Cultural and Policy Considerations
10. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Bohr, A.; Memarzadeh, K. The rise of artificial intelligence in healthcare applications. In Artificial Intelligence in Healthcare; Academic Press: Cambridge, MA, USA, 2020; pp. 25–60.
2. Martinez-Millana, A.; Berntsen, G.; Whitelaw, S. AI solutions in healthcare: A scoping review of systematic reviews. Int. J. Med. Inform. 2022, 161, 104738.
3. Yin, J.; Qian, J.; Zhang, T.; Chen, T. Integration of AI in clinical practice: A review of systematic analyses. J. Med. Internet Res. 2021, 23, e27479.
4. Uusitalo, S.; Salmela, M.; Reijula, S. Ethical considerations of AI in mental health. Philos. Psychiatry Psychol. 2021, 28, 1–11.
5. Graham, S.; Depp, C.; Lee, E.E.; Nebeker, C.; Tu, X.; Kim, H.C.; Jeste, D.V. Artificial intelligence for mental health and mental illnesses: An overview. Curr. Psychiatry Rep. 2019, 21, 116.
6. D’Alfonso, S. AI in mental health. Curr. Opin. Psychol. 2020, 36, 112–117.
7. Jain, R. Chatbots in mental healthcare: Emerging roles and ethical considerations. J. Med. Internet Res. 2023, 25, e37654.
8. Pellicano, E.; Stears, M. Bridging autism, science and society: Moving toward an ethically informed approach to autism research. Autism Res. 2011, 4, 271–282.
9. van Es, T.; Bervoets, J. Autism, predictability, and a future of interactive technologies. In Neurodiversity Studies: A New Critical Paradigm; Routledge: London, UK, 2022; pp. 67–91.
10. Milton, D. On the ontological status of autism: The ‘double empathy problem’. Disabil. Soc. 2012, 27, 883–887.
11. Wu, F.; Lei, X.; Li, S. AI for personalized mental health interventions in neurodivergent populations. Artif. Intell. Med. 2022, 134, 102431.
12. Varela, F.J.; Thompson, E.; Rosch, E. The Embodied Mind: Cognitive Science and Human Experience; MIT Press: Cambridge, MA, USA, 1991.
13. Anagnostopoulou, P.; Alexandropoulou, V.; Lorentzou, G.; Lykothanasi, A.; Ntaountaki, P.; Drigas, A. Artificial intelligence in autism assessment. Int. J. Emerg. Technol. Learn. 2020, 15, 95–107.
14. Jaliaawala, M.S.; Khan, M.A. Applications of artificial intelligence (AI) in clinical psychology. Int. J. Acad. Med. 2020, 6, 72–75.
15. Alcorn, A.M.; Ainger, E.; Charisi, V.; Mantinioti, S.; Petrović, S.; Schadenberg, B.R.; Pellicano, E. Educators’ views on using humanoid robots with autistic learners in special education settings in England. Front. Robot. AI 2019, 6, 107.
16. Martínez-Martin, E.; del Pobil, A.P.; Berry, D. How to measure interactions between robots and children with autism spectrum disorder: A concept review. Int. J. Soc. Robot. 2020, 12, 1129–1156.
17. Moon, S. Virtual reality applications for mental health: Challenges and opportunities. Cyberpsychol. Behav. Soc. Netw. 2018, 21, 37–42.
18. Bravou, V.; Oikonomidou, D.; Drigas, A.S. Applications of virtual reality for autism inclusion. A review. Retos 2022, 45, 779–785.
19. Zhang, M.; Ding, H.; Naumceska, M.; Zhang, Y. Virtual reality technology as an educational and intervention tool for children with autism spectrum disorder: Current perspectives and future directions. Behav. Sci. 2022, 12, 138.
20. Bone, D.; Goodwin, M.S.; Black, M.P.; Lee, C.C.; Audhkhasi, K.; Narayanan, S. Applying machine learning to facilitate autism diagnostics: Pitfalls and promises. J. Autism Dev. Disord. 2015, 45, 1121–1136.
21. Pioggia, G.; Tartarisco, G.; Corda, D. Real-world opportunities for autism interventions via wearable sensors and emotion recognition systems. Appl. Intell. 2005, 23, 129–141.
22. Barrett, L.F. How Emotions Are Made: The Secret Life of the Brain; Houghton Mifflin Harcourt: Boston, MA, USA, 2017.
23. Bölte, S.; Golan, O.; Goodwin, M.S.; Zwaigenbaum, L. What can innovative technologies do for autism spectrum disorders? Autism 2010, 14, 155–159.
24. Larrazabal, A.J.; Nieto, N.; Peterson, V.; Milone, D.H.; Ferrante, E. Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proc. Natl. Acad. Sci. USA 2020, 117, 12592–12594.
25. Abd-Alrazaq, A.A.; Alajlani, M.; Alalwan, A.A.; Bewick, B.M.; Gardner, P.; Househ, M. An overview of the features of chatbots in mental health: A scoping review. Int. J. Med. Inform. 2019, 132, 103978.
26. Gallagher, S. How the Body Shapes the Mind; Oxford University Press: Oxford, UK, 2005.
27. O’Regan, J.K.; Noë, A. A sensorimotor account of vision and visual consciousness. Behav. Brain Sci. 2001, 24, 883–917.
28. De Jaegher, H.; Di Paolo, E. Participatory sense-making: An enactive approach to social cognition. Phenomenol. Cogn. Sci. 2007, 6, 485–507.
29. de Haan, S. Enactive Psychiatry; Cambridge University Press: Cambridge, UK, 2021.
30. Maiese, M. Autonomy, Enactivism, and Mental Disorder; Oxford University Press: Oxford, UK, 2023.
31. Gallagher, S. Understanding interpersonal problems in autism: Interaction theory as an alternative to theory of mind. Philos. Psychiatry Psychol. 2004, 11, 199–217.
32. Clark, A. Surfing Uncertainty: Prediction, Action, and the Embodied Mind; Oxford University Press: Oxford, UK, 2015.
33. Constant, A.; Bervoets, J.; Hens, K.; Van de Cruys, S. Precise worlds for certain minds: An ecological perspective on the relational self in autism. Topoi 2021, 40, 921–934.
34. Di Paolo, E.A.; De Jaegher, H. Enacting becoming: Beyond autonomy and heteronomy. Philos. Today 2022, 66, 403–430.
35. Vermeulen, P. Autism and the predictive mind: A mismatch? Autism 2022, 26, 1271–1274.
36. Hutto, D.D.; Jurgens, A. Taking emerging tropes in the neurodiversity debate seriously: The challenge of neurodiverse empathy. Metaphilosophy 2018, 49, 58–76.
37. Birhane, A. Algorithmic injustice: A relational ethics approach. Patterns 2021, 2, 100205.
38. Chapman, R. The reality of autism: On the metaphysics of disorder and diversity. Philosophies 2021, 6, 15.
39. Galbusera, L.; Kyselo, M. The intersubjective endeavor of psychopathology: Phenomenology and enactivism. Philos. Psychiatry Psychol. 2019, 26, 237–255.
40. De Jaegher, H. Rigid and fluid interactions with institutions. Phenomenol. Cogn. Sci. 2013, 12, 104–113.
41. Eigsti, I.M. A review of embodiment in autism spectrum disorders. Front. Psychol. 2013, 4, 224.
42. Geschwind, D.H. Genetics of autism spectrum disorders. Trends Cogn. Sci. 2015, 19, 408–416.
43. Safron, A.; Fauvel, T.; Park, H. Ambient intelligence in mental health care. Front. Comput. Sci. 2023, 5, 1042731.