Article

Enactivism, Health, AI, and Non-Neurotypical Individuals: Toward Contextualized, Personalized, and Ethically Grounded Interventions

Jordi Vallverdú
ICREA, Philosophy Department, Universitat Autònoma de Barcelona, 08193 Bellaterra (BCN), Spain
Philosophies 2025, 10(3), 51; https://doi.org/10.3390/philosophies10030051
Submission received: 27 March 2025 / Accepted: 16 April 2025 / Published: 28 April 2025

Abstract

The enactive approach offers a powerful theoretical lens for designing artificial intelligence (AI) systems intended to support the health and well-being of non-neurotypical individuals, including those on the autism spectrum and those with ADHD, dyslexia, or other forms of neurodivergence. By emphasizing embodiment, relationality, and participatory sense-making, enactivism encourages AI-based interventions that are highly personalized, context-sensitive, and ethically aware. This paper explores how existing AI applications—ranging from socially assistive robots and virtual reality (VR) therapies to language-processing apps and personalized treatment planning—may be enhanced by incorporating enactivist principles. Despite their promise, AI technologies have seen limited adoption in real-world clinical practice, and persistent challenges such as algorithmic bias, privacy concerns, and the tendency to overlook subjective dimensions raise cautionary notes. Drawing on relevant philosophical literature, empirical studies, and cross-disciplinary debates (including the friction and potential synergies between predictive processing and enactivism), we argue that AI solutions grounded in enactivist thinking can more effectively honor user autonomy, acknowledge the embodied nature of neurodiverse cognition, and avoid reductive standardizations. This expanded, revised version integrates insights on neurodiversity, mental health paradigms, and the ethical imperatives of AI deployment, thereby offering a more comprehensive roadmap for researchers, clinicians, and system developers alike.

1. Introduction: The Rise of AI in Health and Mental Well-Being

1.1. Background: AI’s Ascendancy and Constraints in Healthcare

The integration of artificial intelligence (AI) technologies into healthcare has accelerated notably over the past two decades [1]. AI-driven innovations—including image recognition, data analytics, and predictive modeling—promise to transform clinical diagnostics and decision-making. Research shows that AI excels at processing extensive data repositories to detect subtle patterns that might elude human observers [2]. Yet, real-world deployment outside controlled research settings remains relatively modest [3]. Barriers include data-privacy regulations, clinical skepticism, mismatched workflows, and a lingering question about how to preserve humanistic, empathic dimensions of care in automated systems [4].
In mental health contexts especially, AI’s strengths in data processing must be balanced with recognition of subjective factors such as emotional rapport, stigma, and cultural differences [5]. For instance, automated detection of depressive symptoms by text analysis may raise concerns about overreach, de-contextualized judgments, or privacy violations [6]. Interactions with chatbots can exacerbate social isolation if not integrated into broader therapy networks [7]. Meanwhile, AI solutions can help clinicians avoid diagnostic blind spots by systematically evaluating symptom clusters. The question is how best to harness AI’s data-handling power without undermining interpersonal and contextual elements crucial to effective mental healthcare.

1.2. Defining “Non-Neurotypical”

This paper focuses particularly on non-neurotypical populations—a broad category encompassing individuals whose cognitive or neurological functioning diverges from societal norms. Examples include autism spectrum conditions, ADHD, dyslexia, and other forms of neurodevelopmental variation [8,9]. Under the neurodiversity paradigm, these differences are not inherently pathological but alternate modes of experiencing the world [10]. Even so, many health interventions remain standardized in ways that may fail to reflect the lived reality of non-neurotypical persons, who often have unique sensory, communicative, or social needs. AI-based technologies, if thoughtfully deployed, can offer personalized assistance that addresses specific challenges and capitalizes on individual strengths [11]. Yet naive or poorly designed systems risk reinforcing biases, ignoring embodied experiences, and overlooking the complexity of real interpersonal dynamics.

1.3. Enactivism as a Potentially Transformative Lens

Enactivism is a theoretical orientation proposing that cognition emerges from active, embodied engagement with the environment [12]. In contrast to computational or purely representational theories of mind, enactivism emphasizes the reciprocal interplay between an organism and its surroundings, including the social sphere. For mental health interventions aimed at non-neurotypical populations, enactivism highlights subjective, contextual, and relational dimensions. This orientation resonates with the growing consensus that technologies should be user-centered and ethically aware, especially for vulnerable or marginalized groups. By merging enactivism with AI design, we can potentially ensure that digital tools respect embodied differences and foster participatory sense-making, rather than imposing standardized or disembodied protocols.

2. AI Uses in Non-Neurotypical Healthcare

2.1. Overview of Existing Applications

AI-driven solutions for non-neurotypical individuals span a wide spectrum of interventions, each with distinct rationales and varying success levels [13,14]. Illustrative examples include:
  • Socially Assistive Robots: Platforms such as NAO or Pepper have been tested in assisting autistic users with social cue recognition, emotion expression, and routine-building [15,16]. These robotic systems can engage users in structured tasks or play-based interactions, helping them practice social routines in a predictable, less anxiety-inducing environment.
  • Virtual Reality (VR) Therapy: Immersive environments allow users to practice public speaking, job interviews, or everyday social interactions, adjusting complexity in real time [17,18]. For example, VR scenarios could simulate an office environment for individuals with autism to rehearse the dynamics of job interviews in a controlled setting [19].
  • Language Processing Apps: AI-based natural language tools help users interpret figurative speech, expand vocabulary, or practice conversation scripts, providing nuanced feedback [20]. These apps often include text-to-speech features, predictive suggestions, or word-substitution capabilities tailored to user profiles.
  • Emotion Recognition Systems: Machine learning algorithms can detect affective states via facial expressions or voice intonations, potentially training individuals to interpret social cues [21]. A cautionary note here is that some scholars (e.g., [22]) question whether facial expressions truly map onto universal emotional states, highlighting the need to consider users’ cultural and individual contexts.
  • Personalized Treatment Planning: Data mining and analytics can reveal patterns in symptom emergence or therapy efficacy, allowing healthcare professionals to tailor interventions [23]. Detailed real-time data—such as user schedules, stress levels, or social interactions—can be used to build custom care pathways (a minimal sketch of this idea follows at the end of this subsection).
While these solutions are promising—offering remote access, reduced stigma, or continuous feedback—they face persistent skepticism about real clinical impact, particularly once their novelty wears off [2]. Further, many remain confined to experimental or pilot phases, lacking large-scale longitudinal studies demonstrating efficacy.
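To make the last item in the list above more concrete, the following minimal sketch (written in Python; the log fields and data are hypothetical, not drawn from any cited system) shows the kind of pattern aggregation on which personalized treatment planning typically rests: grouping logged episodes by context to surface which situations most often co-occur with reported overload. A deployed system would add statistical safeguards, clinician review, and far richer features.

    from collections import Counter

    # Hypothetical symptom log: each entry records a daily context and whether
    # the user reported sensory overload in that situation (illustrative data only).
    symptom_log = [
        {"context": "open-plan office", "overload": True},
        {"context": "open-plan office", "overload": True},
        {"context": "quiet room", "overload": False},
        {"context": "commute", "overload": True},
        {"context": "quiet room", "overload": False},
    ]

    # Count overload episodes per context to surface candidate triggers that the
    # user and clinician can then review together when tailoring the care plan.
    overload_by_context = Counter(
        entry["context"] for entry in symptom_log if entry["overload"]
    )
    for context, count in overload_by_context.most_common():
        print(f"{context}: {count} overload episode(s) logged")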

2.2. Opportunities and Ethical Pitfalls

AI’s capacity for large-scale data processing means it can personalize interventions more than typical “one-size-fits-all” approaches. However, data-driven methods risk algorithmic bias if training sets are limited or skewed [24]. Moreover, continuous data collection raises privacy concerns about how sensitive information is stored or used [25]. Another issue is overreliance on technology that might reduce face-to-face human contact, potentially detrimental for individuals who rely on interpersonal warmth and direct empathy [6].
As a result, ethical design must ensure that such AI systems remain accessible, protect data rigorously, minimize bias, and complement rather than replace the empathic and relational aspects of healthcare.

3. Enactivism: Core Principles and Compatibility with AI

3.1. Revisiting the Enactive Paradigm

Enactivism rejects the notion of cognition as a purely internal or “brain-bound” computation. Instead, it foregrounds how the mind emerges in embodied, dynamic, and relational processes [12]. Key tenets include:
(1) Embodied Cognition: Cognitive processes are tightly interwoven with bodily actions and sensorimotor capacities [26].
(2) Context-Sensitivity: Action and perception cannot be fully separated from the environment in which they unfold [27].
(3) Participatory Sense-Making: Social understanding and interaction are co-constructed, emphasizing reciprocal adjustments and shared meaning [28].
This perspective has gained traction in fields such as social cognition, philosophy of mind, and even psychiatry [29,30]. For AI design, an enactive approach warns against purely data-driven solutions that ignore bodily signals, cultural context, and real-time interpersonal feedback.

3.2. Interaction Theory vs. Predictive Processing Debates

Discussions around enactivism intersect with interaction theory [31] and predictive processing frameworks [32]. Interaction theory underscores how humans directly perceive and respond to others’ intentions through nonverbal cues and reciprocal bodily engagements. In contrast, predictive processing posits the brain as a generative model aiming to minimize “prediction error”, thereby framing cognition in terms of top-down predictions and bottom-up error signals.
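In deliberately simplified form (the notation here is illustrative and not taken from the cited literature), the predictive picture can be written as an error-correction loop in which an internal estimate is nudged toward the incoming signal:

    \varepsilon_t = s_t - \mu_t \qquad \text{(bottom-up prediction error: sensory input minus the current top-down prediction)}
    \mu_{t+1} = \mu_t + \eta \, \varepsilon_t \qquad \text{(update of the internal estimate, with learning rate } 0 < \eta < 1\text{)}

In this toy version the prediction simply is the internal estimate; richer models replace it with a hierarchical generative mapping, but the logic of error-driven correction remains the same.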
Some argue these approaches can be integrated—especially if predictive models account for real-time sensorimotor loops [33]. Others [34] highlight deep conceptual frictions about the embodied role of the environment that predictive processing may miss. For non-neurotypical individuals, adopting a purely predictive-brain stance risks pathologizing atypical predictions without acknowledging the unique embodied and cultural contexts in which such predictions arise [35]. Enactivism, by contrast, emphasizes that each user “enacts” the world differently, shaping and shaped by their lived environment.

3.3. Neurodiversity Perspectives

Enactivism aligns well with the neurodiversity movement, which views conditions such as autism or ADHD not simply as pathologies but as variations in cognition that can include strengths [10]. Because enactivism stresses active engagement and sense-making, it underscores that each individual enacts the world differently based on their unique sensorimotor and social interactions [36]. Therefore, an enactive approach to AI system design demands flexibility, respect for each user’s self-defined goals, and an appreciation for how environmental or cultural factors shape what “support” means in practice [9].

4. Enactivism’s Contribution to AI Design for Non-Neurotypical Users

4.1. Embodied and Context-Sensitive Interventions

Traditional AI implementations often rely on static user profiles or normative models of social behavior. An enactive approach, by contrast, encourages systems to adapt dynamically to a user’s embodied state. For example:
  • Robotic Tutors: Socially assistive robots can detect subtle nonverbal cues—such as restlessness or shifts in vocal pitch—and adjust session pacing or content in real time [16]. This synergy with a user’s sensorimotor engagement avoids imposing a single, uniform script.
  • VR Therapy: Virtual environments might monitor physiological signals (heart rate, galvanic skin response) to gauge overload or anxiety, automatically scaling down complexity or prompting breaks [17].
Such “embodied adaptation” underscores that each user’s experience must be addressed holistically, balancing user autonomy with supportive scaffolds.
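As a rough illustration of this adaptation logic, the sketch below (in Python; the signal, thresholds, and adjustment steps are hypothetical rather than drawn from any cited platform) lowers scenario complexity and offers a break when a physiological proxy for arousal stays elevated, and raises it only gently otherwise:

    def adapt_session(complexity, heart_rate, baseline):
        """Return an updated complexity level and whether to offer a break.

        Illustrative thresholds only: a deployed system would calibrate them per
        user, combine several signals, and always leave the final choice to pause,
        continue, or change the scenario with the user.
        """
        elevated = heart_rate > 1.2 * baseline      # crude proxy for overload
        if elevated and complexity > 1:
            return complexity - 1, True             # simplify the scenario and offer a break
        if not elevated and complexity < 5:
            return complexity + 1, False            # raise difficulty only when comfortable
        return complexity, elevated

    # Example: heart rate climbs during a level-3 VR scenario, so the system
    # steps the scenario down to level 2 and proposes a pause.
    new_level, offer_break = adapt_session(complexity=3, heart_rate=95.0, baseline=72.0)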

4.2. Social Interdependence and Relationality

Enactivism stresses that cognition is relational, especially relevant in interventions for non-neurotypical individuals who often benefit from consistent familial, therapeutic, or peer support. AI-based solutions can foster relationships, for instance, through:
  • Shared Dashboards: A parent and child jointly track daily emotional patterns or therapy progress, collaboratively adjusting strategies.
  • Group VR Scenarios: Multi-user immersive environments in which participants practice cooperative tasks, mirroring real-life demands rather than isolating the user with a non-human agent.
By weaving relational considerations into AI design, technology can strengthen social ties rather than inadvertently isolating the user or reinforcing a purely individualistic model of therapy.

4.3. Agency and Autonomy Enhancement

An enactive orientation spotlights user agency, crucial for non-neurotypical persons who may have experienced paternalistic or deficit-based systems. AI can bolster autonomy by:
  • Offering adjustable difficulty levels or “comfort zones”, letting the user indicate readiness for more challenging tasks.
  • Allowing user-led modifications, e.g., toggling certain stimuli on/off, selecting communication modes (visual aids, typed text, spoken dialogue).
  • Prompting the user for reflections on how they experienced a particular scenario, respecting the role of first-person insights.
By centering user control, enactively informed AI systems adhere to ethical principles of self-determination and help users build trust.
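A minimal sketch of what such user-controlled settings could look like in code (Python; the fields, defaults, and method names are invented for illustration, not a specification of any existing system):

    from dataclasses import dataclass, field

    @dataclass
    class UserPreferences:
        """User-owned settings that the system reads but never silently overrides."""
        communication_mode: str = "typed_text"          # e.g., "visual_aids", "spoken_dialogue"
        difficulty: int = 2                             # user-chosen starting level (1 = gentlest)
        muted_stimuli: set = field(default_factory=lambda: {"flashing_lights"})
        reflection_prompts: bool = True                 # ask for first-person feedback after sessions

        def request_harder_task(self):
            # Difficulty increases only when the user explicitly asks for it.
            self.difficulty += 1

    prefs = UserPreferences()
    prefs.request_harder_task()   # the user, not the system, initiates the step up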

4.4. Addressing Cultural and Environmental Contexts

Cognition unfolds in specific cultural and environmental settings [26]. Non-neurotypical individuals are not monolithic; their experiences vary by family context, social stigma, resource availability, or cultural norms regarding help-seeking. Enactive AI design might:
  • Localize VR scenarios to reflect a user’s everyday tasks and cultural norms.
  • Integrate user-chosen imagery, language, or role-players (e.g., siblings, mentors) into digital interactions.
  • Adapt session times to real schedules and personal circadian rhythms.
Such “situated design” moves beyond standardized or “clinic-based” scripts and embeds therapy in the user’s actual lifeworld.

4.5. Participatory Sense-Making Tools

Given enactivism’s emphasis on participatory sense-making [28], AI can incorporate features enabling co-creation of meaning:
  • Interactive Goal-Setting: At therapy onset, the system can prompt users and caregivers to define shared goals or daily obstacles.
  • Collaborative Reflection: After each session, the user or support network can annotate results—e.g., “User felt anxious due to bright lighting in VR scene”—which shapes the subsequent session.
Rather than passively measuring outcomes, enactive AI fosters an active interpretive process, consistent with enactivist convictions about shared meaning-making.
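A small fragment can illustrate the difference (Python; the setting names and keyword matching are placeholders for what would in practice be a dialogue with the user and support network): the free-text reflection is not filed away as an outcome score but feeds directly into how the next session is set up.

    def plan_next_session(settings, annotations):
        """Adjust hypothetical scene settings based on free-text reflections.

        The keyword matching below stands in for a genuine review step in which
        the user and caregivers confirm any change before it is applied.
        """
        updated = dict(settings)
        if any("bright" in note.lower() for note in annotations):
            updated["lighting"] = "dim"          # respond to "anxious due to bright lighting"
        if any("too fast" in note.lower() for note in annotations):
            updated["pace"] = "slower"
        return updated

    next_settings = plan_next_session(
        {"lighting": "default", "pace": "normal"},
        ["User felt anxious due to bright lighting in VR scene"],
    )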

5. Ethical and Practical Implications

5.1. Algorithmic Bias and Data Fairness

A chief concern is that AI may embed or amplify stereotypes in training data [37]. This risk is acute for autism research, which historically underserves certain populations (e.g., women and ethnic minorities) [24]. An enactive lens, while not a technical fix, mandates iterative dialogues with user communities and bias audits throughout the development cycle [38]. Ethical oversight committees may require robust fairness checks, aligning with enactivism’s relational ethic that calls for transparency and inclusivity.
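As an illustration of what one basic audit step can look like (Python; the group labels, data, and threshold are hypothetical, and a real audit would use proper statistical testing on much larger samples), error rates can be compared across subgroups and large gaps flagged for human review:

    def error_rate(predictions, labels):
        return sum(p != y for p, y in zip(predictions, labels)) / len(labels)

    def audit_by_group(results, max_gap=0.1):
        """Flag subgroups whose error rate exceeds the best-performing group's by more than max_gap."""
        rates = {group: error_rate(p, y) for group, (p, y) in results.items()}
        best = min(rates.values())
        return [group for group, rate in rates.items() if rate - best > max_gap]

    # Hypothetical screening results split by self-reported gender:
    # predictions vs. ground-truth labels for each subgroup.
    flagged = audit_by_group({
        "women": ([1, 0, 0, 1], [0, 0, 1, 1]),   # 2 of 4 wrong
        "men":   ([1, 0, 1, 1], [1, 0, 1, 1]),   # 0 of 4 wrong
    })
    # flagged == ["women"], prompting a review of training data and features.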

5.2. Privacy and Informed Consent

Many AI systems log daily personal data—e.g., emotional states, geolocation, or sensor readings. Ethical usage hinges on explicit informed consent, with the user (and caregivers, if relevant) controlling how much monitoring occurs [4]. Enactivism’s emphasis on embodied and relational autonomy guides system designers to ensure that data collection, storage, and usage remain co-created and situationally appropriate, rather than imposed.

5.3. Autonomy vs. Automation

In mental health, oversimplified “automation” can stifle the sense of being heard and respected. An enactive approach reframes AI as a collaborative agent that complements, rather than replaces, human support [6]. Automated screening might free clinicians’ time, facilitating deeper interpersonal dialogue, while sensor-driven VR might let users practice coping strategies before discussing outcomes with a therapist. Such synergy harnesses technology’s strengths while preserving interpersonal connectedness—critical for mental health outcomes.

5.4. Human Contact, Empathy, and Social Withdrawal

A repeated concern is that chatbots or robotic systems could reduce direct human empathy. Recognizing cognition as relational, enactivism suggests that technology is best integrated with interpersonal therapy, not substituted for it [5]. Chatbots can triage inquiries or provide after-hours support, but human professionals remain essential for deeper empathic bonds and moral accountability. This balanced approach prevents social withdrawal through over-reliance on non-human interaction.

6. Toward a More Holistic (Inter-)Personalized Approach

6.1. Expanding “Personalized” to “Inter-Personalized”

Contemporary psychological treatments increasingly recognize an inter-personalized approach over a strictly personalized model. While personalization tailors interventions to individual characteristics (e.g., symptom profiles, genetic markers, or personal preferences), the inter-personalized paradigm also accounts for the broader social and interpersonal dynamics crucial to mental well-being. Family systems therapy, group interventions, and community-based support illustrate how mental health emerges from—and shapes—relationships and cultural contexts [39].
When AI is integrated into mental health resources for non-neurotypical users, adopting this inter-personalized lens aligns with enactivist tenets. For example, VR therapy scenarios may simulate everyday interactions with family, colleagues, or friends, letting the user practice real-life negotiations rather than purely “intra-psychic” tasks.

6.2. Addressing the “Double Empathy” Challenge

Neurotypical observers sometimes characterize autistic individuals as having impaired empathy. However, the double empathy problem [10] reorients this conversation: the social mismatch arises from both neurotypical and autistic individuals failing to read each other’s cues or interpret each other’s contexts. An enactive AI system, designed to adapt to each user’s mode of sense-making, can help mitigate such mismatches by bridging “interactional affordances” that otherwise remain overlooked.

7. Counterarguments and Refinements

7.1. “Too Broad or Philosophical?”

Some critics see enactivism as “overly philosophical” for direct clinical application. Yet numerous authors (e.g., [29,40]) demonstrate how enactive frameworks can inform tangible interventions in autism, mood disorders, and personality conditions. Thus, enactivism need not supplant medical or cognitive models but complement them, emphasizing user experiences, embodiment, and relationality.

7.2. Correcting Overly Sweeping Ethical Claims

A prior draft might have implied that any non-enactivist healthcare approach is “inherently unethical.” This final, expanded version clarifies that other paradigms can be ethically defensible and clinically beneficial. Nonetheless, an enactive perspective can strengthen ethical considerations—especially around user autonomy and collaborative meaning-making—by acknowledging that non-neurotypical experiences often deviate from norm-based assumptions embedded in AI systems.

7.3. Integrating Predictive Processing Without Negating Embodiment

Predictive processing can overshadow bodily or social processes by overemphasizing the brain’s generative models. Some research attempts to reconcile these frameworks, positing an “enactive predictive processing” that integrates sensorimotor loops [33]. The key is whether AI design remains mindful of user context or reduces therapy to data-driven predictions. Enactivism highlights that any predictive modeling must remain anchored in real bodily interactions and reciprocal social feedback.

7.4. Neurobiological Considerations

Critics of enactivism sometimes claim it downplays or oversimplifies the neural underpinnings of non-neurotypical conditions [41]. Enactivism does not reject the role of neural correlates but insists they be understood within broader organism–environment loops [42]. A purely neurobiological lens might miss crucial socio-environmental constraints or enabling factors. A synergy of enactivist and neurobiological insights—especially in domains such as autism research—can thus yield a more holistic account of cognitive differences.

8. Guidelines for Enactive AI Interventions

To operationalize enactivism in AI design, we propose the following guidelines, aligning with both philosophical insights and empirical constraints:
(1) User-Centered Co-Design
  • Involve non-neurotypical users, caregivers, and clinicians in iterative design, from conceptualization to deployment.
  • Host “think-aloud” sessions to capture first-person experiences and identify points of friction or confusion.
(2) Embodied Feedback and Adaptation
  • Monitor real-time sensor data to gauge stress or overload, automatically adjusting the complexity of tasks in VR or robot-based interventions.
  • Provide clear pause/break options whenever the user signals discomfort or disinterest, preserving autonomy.
(3) Relational Integration
  • Encourage collaborative reflection among users, family members, and therapists.
  • Create multi-participant VR or robot-assisted scenarios that simulate the real social contexts of the individual (e.g., classroom, office, or family gathering).
(4) Cultural and Environmental Fit
  • Adapt language, cultural cues, and scenario content to the user’s local environment.
  • Avoid universal “best practices” that ignore unique socioeconomic or familial constraints.
(5) Bias Auditing and Ethical Oversight
  • Regularly evaluate algorithms for systematic misclassification or disproportionate errors affecting specific subgroups (e.g., women and ethnic minorities).
  • Maintain rigorous data-privacy safeguards and transparent consent protocols.
(6) Outcome Measures Beyond Symptom Reduction
  • Include metrics for user satisfaction, sense of agency, and social connections, reflecting an enactivist emphasis on relational flourishing.
  • Incorporate first-person narratives and feedback loops into standard quantitative measures.
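To indicate how guideline (6) might be recorded in practice, the fragment below (Python; the field names and scales are invented for illustration) keeps the first-person narrative alongside the quantitative scores instead of discarding it:

    from dataclasses import dataclass

    @dataclass
    class SessionOutcome:
        symptom_score: float      # score on a standard clinical scale (illustrative)
        agency_rating: int        # user-reported sense of agency, 1-7
        connection_rating: int    # user-reported social connection, 1-7
        narrative: str            # first-person reflection kept alongside the numbers

    outcome = SessionOutcome(
        symptom_score=12.0,
        agency_rating=6,
        connection_rating=4,
        narrative="Felt in control choosing the scenario; the group task was still tiring.",
    )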

9. Future Directions and Research

9.1. Empirical Validation and Longitudinal Impact

Long-term outcome studies are needed to see whether enactively informed AI solutions truly embed themselves in daily life beyond initial novelty [2]. Mixed-method designs—integrating clinical scales with phenomenological interviews—can illuminate if and how these systems improve autonomy, well-being, or real-world social participation. Does VR-practiced empathy translate into better workplace integration? Do robotic tutors remain engaging over extended periods?

9.2. Ambient Smart Environments

Next-generation interventions might integrate ambient intelligence, embedding AI sensors throughout living spaces to support users. For instance, lighting or acoustic levels could adapt to reduce sensory overload for a user on the autism spectrum. Yet such systems risk intrusiveness unless carefully governed by enactivist principles that emphasize co-design, privacy, and user empowerment [43].
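A consent-gated version of such an adjustment might look like the following sketch (Python; the device behavior, units, and thresholds are hypothetical), in which no ambient change is made unless the user has explicitly opted in:

    def adjust_brightness(noise_db, brightness, consented):
        """Lower a hypothetical room brightness level when the environment is loud,
        but only if the user has opted in to ambient adjustments."""
        if not consented:
            return brightness                  # no silent interventions
        if noise_db > 70 and brightness > 30:
            return brightness - 20             # reduce combined sensory load
        return brightness

    new_brightness = adjust_brightness(noise_db=75.0, brightness=80, consented=True)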

9.3. Bridging Predictive Mind and Enactive Mind

Further theoretical refinement could explore how an “enactive predictive processing” might unify sensorimotor contingencies with generative modeling [32,34]. For AI, this means designing algorithms that dynamically incorporate user feedback to update predictions—yet remain open-ended and contextually grounded, rather than imposing normative cognitive templates.

9.4. Cross-Cultural and Policy Considerations

Cultural differences in how autism, ADHD, or mental health are perceived must be factored in if AI solutions are to scale globally. Engaging local stakeholders, translating frameworks into local languages, and adapting design to cultural norms all reflect enactivism’s emphasis on situated cognition. On the policy side, regulatory frameworks must clarify liability, data ownership, and professional accountability—particularly for AI systems making mental health recommendations.

10. Conclusions

Enactivism’s focus on embodiment, participatory sense-making, and contextual meaning-making holds significant promise for guiding AI-based therapies for non-neurotypical individuals. By aligning with the neurodiversity viewpoint, an enactive lens encourages us to see differences in cognition not as deficits but as distinct ways of engaging the world. This perspective helps avoid over-standardization or unintentional bias within AI solutions.
Simultaneously, one must acknowledge AI’s limitations—algorithmic bias, limited clinical adoption, and the potential weakening of human relationships if technology is deployed insensitively. Integrating enactivist design principles can mitigate these pitfalls by emphasizing user autonomy, relational contexts, and iterative co-design. Far from excluding other paradigms, enactivism can complement established approaches, ensuring that mental health interventions fully incorporate social, bodily, and ethical dimensions.
Overall, an enactive orientation grounds AI deployment in a holistic, person-centered (and inter-personalized) ethos, reaffirming the relational essence of mental health support. By embracing these values, we can steer AI toward more inclusive, context-sensitive, and ethically responsible strategies for non-neurotypical users.

Funding

The research is supported by the ICREA 2019 and PALGAIT (PID2023-148336NB-100) grants.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Bohr, A.; Memarzadeh, K. The rise of artificial intelligence in healthcare applications. In Artificial Intelligence in Healthcare; Academic Press: Cambridge, MA, USA, 2020; pp. 25–60.
  2. Martinez-Millana, A.; Berntsen, G.; Whitelaw, S. AI solutions in healthcare: A scoping review of systematic reviews. Int. J. Med. Inform. 2022, 161, 104738.
  3. Yin, J.; Qian, J.; Zhang, T.; Chen, T. Integration of AI in clinical practice: A review of systematic analyses. J. Med. Internet Res. 2021, 23, e27479.
  4. Uusitalo, S.; Salmela, M.; Reijula, S. Ethical considerations of AI in mental health. Philos. Psychiatry Psychol. 2021, 28, 1–11.
  5. Graham, S.; Depp, C.; Lee, E.E.; Nebeker, C.; Tu, X.; Kim, H.C.; Jeste, D.V. Artificial intelligence for mental health and mental illnesses: An overview. Curr. Psychiatry Rep. 2019, 21, 116.
  6. D’Alfonso, S. AI in mental health. Curr. Opin. Psychol. 2020, 36, 112–117.
  7. Jain, R. Chatbots in mental healthcare: Emerging roles and ethical considerations. J. Med. Internet Res. 2023, 25, e37654.
  8. Pellicano, E.; Stears, M. Bridging autism, science and society: Moving toward an ethically informed approach to autism research. Autism Res. 2011, 4, 271–282.
  9. van Es, T.; Bervoets, J. Autism, predictability, and a future of interactive technologies. In Neurodiversity Studies: A New Critical Paradigm; Routledge: London, UK, 2022; pp. 67–91.
  10. Milton, D. On the ontological status of autism: The ‘double empathy problem’. Disabil. Soc. 2012, 27, 883–887.
  11. Wu, F.; Lei, X.; Li, S. AI for personalized mental health interventions in neurodivergent populations. Artif. Intell. Med. 2022, 134, 102431.
  12. Varela, F.J.; Thompson, E.; Rosch, E. The Embodied Mind: Cognitive Science and Human Experience; MIT Press: Cambridge, MA, USA, 1991.
  13. Anagnostopoulou, P.; Alexandropoulou, V.; Lorentzou, G.; Lykothanasi, A.; Ntaountaki, P.; Drigas, A. Artificial intelligence in autism assessment. Int. J. Emerg. Technol. Learn. 2020, 15, 95–107.
  14. Jaliaawala, M.S.; Khan, M.A. Applications of artificial intelligence (AI) in clinical psychology. Int. J. Acad. Med. 2020, 6, 72–75.
  15. Alcorn, A.M.; Ainger, E.; Charisi, V.; Mantinioti, S.; Petrović, S.; Schadenberg, B.R.; Pellicano, E. Educators’ views on using humanoid robots with autistic learners in special education settings in England. Front. Robot. AI 2019, 6, 107.
  16. Martínez-Martin, E.; del Pobil, A.P.; Berry, D. How to measure interactions between robots and children with autism spectrum disorder: A concept review. Int. J. Soc. Robot. 2020, 12, 1129–1156.
  17. Moon, S. Virtual reality applications for mental health: Challenges and opportunities. Cyberpsychology Behav. Soc. Netw. 2018, 21, 37–42.
  18. Bravou, V.; Oikonomidou, D.; Drigas, A.S. Applications of virtual reality for autism inclusion. A review. Retos 2022, 45, 779–785.
  19. Zhang, M.; Ding, H.; Naumceska, M.; Zhang, Y. Virtual reality technology as an educational and intervention tool for children with autism spectrum disorder: Current perspectives and future directions. Behav. Sci. 2022, 12, 138.
  20. Bone, D.; Goodwin, M.S.; Black, M.P.; Lee, C.C.; Audhkhasi, K.; Narayanan, S. Applying machine learning to facilitate autism diagnostics: Pitfalls and promises. J. Autism Dev. Disord. 2015, 45, 1121–1136.
  21. Pioggia, G.; Tartarisco, G.; Corda, D. Real-world opportunities for autism interventions via wearable sensors and emotion recognition systems. Appl. Intell. 2005, 23, 129–141.
  22. Barrett, L.F. How Emotions Are Made: The Secret Life of the Brain; Houghton Mifflin Harcourt: Boston, MA, USA, 2017.
  23. Bölte, S.; Golan, O.; Goodwin, M.S.; Zwaigenbaum, L. What can innovative technologies do for autism spectrum disorders? Autism 2010, 14, 155–159.
  24. Larrazabal, A.J.; Nieto, N.; Peterson, V.; Milone, D.H.; Ferrante, E. Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proc. Natl. Acad. Sci. USA 2020, 117, 12592–12594.
  25. Abd-Alrazaq, A.A.; Alajlani, M.; Alalwan, A.A.; Bewick, B.M.; Gardner, P.; Househ, M. An overview of the features of chatbots in mental health: A scoping review. Int. J. Med. Inform. 2019, 132, 103978.
  26. Gallagher, S. How the Body Shapes the Mind; Oxford University Press: Oxford, UK, 2005.
  27. O’Regan, J.K.; Noë, A. A sensorimotor account of vision and visual consciousness. Behav. Brain Sci. 2001, 24, 883–917.
  28. De Jaegher, H.; Di Paolo, E. Participatory sense-making: An enactive approach to social cognition. Phenomenol. Cogn. Sci. 2007, 6, 485–507.
  29. de Haan, S. Enactive Psychiatry; Cambridge University Press: Cambridge, UK, 2021.
  30. Maiese, M. Autonomy, Enactivism, and Mental Disorder; Oxford University Press: Oxford, UK, 2023.
  31. Gallagher, S. Understanding interpersonal problems in autism: Interaction theory as an alternative to theory of mind. Philos. Psychiatry Psychol. 2004, 11, 199–217.
  32. Clark, A. Surfing Uncertainty: Prediction, Action, and the Embodied Mind; Oxford University Press: Oxford, UK, 2015.
  33. Constant, A.; Bervoets, J.; Hens, K.; Van de Cruys, S. Precise worlds for certain minds: An ecological perspective on the relational self in autism. Topoi 2021, 40, 921–934.
  34. Di Paolo, E.A.; De Jaegher, H. Enacting becoming: Beyond autonomy and heteronomy. Philos. Today 2022, 66, 403–430.
  35. Vermeulen, P. Autism and the predictive mind: A mismatch? Autism 2022, 26, 1271–1274.
  36. Hutto, D.D.; Jurgens, A. Taking emerging tropes in the neurodiversity debate seriously: The challenge of neurodiverse empathy. Metaphilosophy 2018, 49, 58–76.
  37. Birhane, A. Algorithmic injustice: A relational ethics approach. Patterns 2021, 2, 100205.
  38. Chapman, R. The reality of autism: On the metaphysics of disorder and diversity. Philosophies 2021, 6, 15.
  39. Galbusera, L.; Kyselo, M. The intersubjective endeavor of psychopathology: Phenomenology and enactivism. Philos. Psychiatry Psychol. 2019, 26, 237–255.
  40. De Jaegher, H. Rigid and fluid interactions with institutions. Phenomenol. Cogn. Sci. 2013, 12, 104–113.
  41. Eigsti, I.M. A review of embodiment in autism spectrum disorders. Front. Psychol. 2013, 4, 224.
  42. Geschwind, D.H. Genetics of autism spectrum disorders. Trends Cogn. Sci. 2015, 19, 408–416.
  43. Safron, A.; Fauvel, T.; Park, H. Ambient intelligence in mental health care. Front. Comput. Sci. 2023, 5, 1042731.