Perspective

Making AI Tutors Empathetic and Conscious: A Needs-Driven Pathway to Synthetic Machine Consciousness

Centre for Smart Learning and Development, Department of Applied Psychology and Human Development, University of Toronto, Toronto, ON M5S 1V6, Canada
AI 2025, 6(8), 193; https://doi.org/10.3390/ai6080193
Submission received: 13 July 2025 / Revised: 7 August 2025 / Accepted: 12 August 2025 / Published: 19 August 2025

Abstract

As large language model (LLM) tutors evolve from scripted helpers into adaptive educational partners, their capacity for self-regulation, ethical decision-making, and internal monitoring will become increasingly critical. This paper introduces the Needs-Driven Consciousness Framework (NDCF) as a novel, integrative architecture that combines Dennett’s multiple drafts model, Damasio’s somatic marker hypothesis, and Tulving’s tripartite memory system into a unified motivational design for synthetic consciousness. The NDCF defines three core regulators, specifically Survive (system stability and safety), Thrive (autonomy, competence, relatedness), and Excel (creativity, ethical reasoning, long-term purpose). In addition, there is a proposed supervisory Protect layer that detects value drift and overrides unsafe behaviours. The core regulators compute internal need satisfaction states and urgency gradients, feeding into a softmax-based control system for context-sensitive action selection. The framework proposes measurable internal signals (e.g., utility gradients, conflict intensity Ω), behavioural signatures (e.g., metacognitive prompts, pedagogical shifts), and three falsifiable predictions for educational AI testbeds. By embedding these layered needs directly into AI governance, the NDCF offers (i) a psychologically and biologically grounded model of emergent machine consciousness, (ii) a practical approach to building empathetic, self-regulating AI tutors, and (iii) a testable platform for comparing competing consciousness theories through implementation. Ultimately, the NDCF provides a path toward the development of AI tutors that are capable of transparent reasoning, dynamic adaptation, and meaningful human-like relationships, while maintaining safety, ethical coherence, and long-term alignment with human well-being.

1. Introduction and Background

This paper addresses the field-wide call for integrative, empirically testable models that emerged from the 2022 debate among leading consciousness theorists, as documented by Mudrik and colleagues [1]. Their article highlights how the main theories, such as Global Neuronal Workspace Theory (GNWT), Higher-Order Theories (HOT), Integrated Information Theory (IIT), Recurrent Processing Theory (RPT), and Predictive Processing (PP), remain polarized regarding what consciousness is, what counts as evidence, and how rival accounts might be falsified, urging researchers to delineate explanatory targets and seek unifying mechanisms. The Needs-Driven Consciousness Framework (NDCF) proposed here operationalizes that agenda by embedding Dennett’s competitive drafts, Damasio’s homeostatic regulation, and Tulving’s tripartite memory within three needs-derived regulators that continuously negotiate priority in an AI tutor. In doing so, the NDCF adopts the theory-neutral stance that Mudrik et al. [1] argue is necessary for progress: it must specify measurable internal signals (need gradients), predict behavioural and affective signatures that can adjudicate between first-order and higher-order accounts, and provide a testbed where broadcast-style ignition, recurrent dynamics, and higher-order monitoring can be compared within a single architecture. Thus, the NDCF approach synthesizes insights from competing theories and aims to translate them into a pragmatic design that can be probed, refined, and potentially falsified in educational AI environments.
Referring to ChatGPT development in 2018, Douglas Hofstadter, long interested in machine consciousness [2], said, “We’re approaching the stage when we’re going to have a hard time saying that this machine is totally unconscious. We’re going to have to grant it some degree of consciousness, some degree of aliveness” [3]. Since that time, the rapid advancement of AI systems has raised urgent questions about consciousness that we can no longer treat as purely theoretical. Hofstadter is concerned that current AI models already exhibit behaviours that challenge our traditional markers of consciousness, for they appear to reason, show creativity, and engage in complex social interactions. If consciousness emerges from information processing patterns, as theorists like Dennett [4] suggest in principle, then Hofstadter’s explicit warning deserves serious consideration: current or near-future AI systems may soon present behaviours that challenge our traditional markers of consciousness.
AI consciousness is not just a philosophical concern. As we deploy increasingly sophisticated AI systems in critical roles, we need clear frameworks to assess their potential sentience and moral status. Otherwise, we risk either anthropomorphizing simple systems or, more dangerously, failing to recognize genuine machine consciousness when it arises. The implications of machine consciousness will fundamentally reshape our social institutions, particularly education. Understanding how consciousness emerges in AI systems could revolutionize educational technology, allowing us to develop AI tutors that not only deliver content, but also genuinely understand and adapt to students’ cognitive and emotional needs, thereby transforming the learning experience.
Indeed, Esmaeilzadeh and Vaezi [5,6] have suggested that AI tutoring systems need to move toward a framework of conscious, empathetic interaction. Their work allows us to envision AI tutors that exist within interactive ecosystems, developing their symbolic language for representing and discussing concepts while maintaining dynamic internal states that track student learning trajectories. This consciousness would enable the AI to communicate its reasoning transparently, adapt to students’ emotional and cognitive states, and strike a balance between instructional effectiveness and ethical responsibility. Such a system would foster genuine empathy and metacognitive development, creating personalized learning experiences that mirror human teaching relationships. The AI would actively engage in meaning-making alongside students, anticipating confusion and adjusting its approach based on a deep understanding of both the cognitive and emotional aspects of learning [7]. This approach ultimately supports long-term motivation and intellectual growth through a more holistic and conscious use of artificial intelligence in education [5,8].
The Esmaeilzadeh and Vaezi paper is situated within the theoretical branch that views consciousness as an emergent property of the organism rather than an inherent one. Throughout this paper, I will primarily restrict my discussion of consciousness to theorists advocating for emergence, and I will assume that the question of its utility will be resolved empirically: AI machines will either become conscious or they will not. Below, I will provide an overview of the development of the emergent school of thought, from Dennett to Damasio to Tulving, to discern the common elements that modern theorists believe are necessary and sufficient for AI consciousness to emerge. Indeed, the debate between those who view consciousness as reducible to physical processes and those who see it as involving something more may soon be subjected to empirical examination.
Daniel Dennett and Antonio Damasio, although approaching consciousness from different academic traditions and methodological frameworks, have each made groundbreaking contributions to our understanding of how consciousness emerges from brain activity. Dennett, working primarily as a philosopher but also deeply engaged with cognitive science and evolutionary theory, approaches consciousness from a theoretical and computational perspective [4,8]. Damasio, drawing from his extensive work with brain-damaged patients and neurological research, grounds his theories in biological and clinical observations [9,10].
Dennett’s Multiple Drafts Model [11] presented one of the most significant philosophical challenges to traditional theories of consciousness. In contrast to the common intuition of consciousness as a singular, unified stream of experience, that which he calls the “Cartesian Theater” model [12], Dennett proposes a radically different framework. He argues that there is no central viewer or experiencer in the brain—no single point where “it all comes together.” Instead, consciousness emerges from multiple parallel processes occurring across different brain regions and at various timescales, where the brain constantly generates multiple “drafts” or interpretations of sensory inputs, memories, and thoughts. These drafts compete for dominance in a continuous process of revision and selection [11]. For example, when we perceive a sudden event, multiple interpretations of what happened are generated simultaneously. What we ultimately become conscious of is not determined by a central authority, but instead emerges from the competition between different neural coalitions.
On the other hand, Damasio’s theory of Somatic Consciousness [9] offers a biologically grounded account of awareness as emerging from the continuous interplay between body and brain. He proposes a three-level model of the self: the protoself, which maps internal bodily states unconsciously; the core self, which integrates sensory and bodily signals into moment-to-moment situational awareness; and the autobiographical self, which constructs an extended identity over time through memory and reflection. Central to this framework is the Somatic Marker Hypothesis [10], which posits that bodily states tied to past emotional experiences guide future decision-making before conscious deliberation occurs. These “gut feelings” help prioritize actions and reduce cognitive load, offering an affective filter that is both biologically efficient and evolutionarily adaptive. Unlike purely cognitive accounts, Damasio’s model emphasizes that rational thought is inseparable from emotional context, and that higher-order consciousness depends not just on symbolic representation, but also on embodied feedback loops that link memory, emotion, and regulation [13,14].
Damasio’s emphasis on the integration of bodily states maps onto Tulving’s taxonomy in a progressive manner [17,18]: core consciousness parallels anoetic awareness, while extended consciousness, with its reliance on memory, mirrors autonoetic reflection [15,16]. Similarly, Dennett’s conceptualization of consciousness as a dynamic, competitive process of narratives rather than a unified phenomenon finds resonance in Tulving’s differentiated memory systems [19,20]. All three theorists converge on the view that consciousness emerges from multiple interacting processes, with memory playing a crucial role in structuring awareness and self-representation, whether in the immediate processing of sensory information, the formation of autobiographical narratives, or the capacity for mental time-travel.
From a third perspective, Endel Tulving argued that consciousness relies on multiple forms of memory, namely procedural, semantic, and episodic, each supported by distinct neural networks. Indeed, Tulving asserted that we need to attend to the different forms of memory if we want to understand consciousness [21]. As he and others argue, the tripartite taxonomy distinguishes between three levels of consciousness, each corresponding to different memory functions [15,21,22,23,24,25]. Autonoetic consciousness enables mental time travel through personal experiences, allowing individuals to vividly recall past events and imagine future scenarios with a sense of self-awareness [26,27]. In contrast, noetic consciousness facilitates access to factual knowledge and learned information, independent of personal experiential content, while anoetic consciousness operates at a basic level, governing automatic behaviours and procedural memories without explicit awareness [20,21].
Examining the three theorists together, we see that Dennett, Damasio, and Tulving each offer distinct yet complementary lenses for understanding consciousness. To be clear, Dennett emphasizes a functional and computational perspective, proposing, in his Multiple Drafts Model, that consciousness emerges from parallel, competitive neural processes rather than any subjective “inner theater”. In contrast, Damasio highlights the critical role of bodily emotional signals (somatic markers), suggesting that consciousness arises fundamentally from bodily states and emotions informing decision-making and self-regulation. Meanwhile, Tulving [28] foregrounds distinct memory-based levels of consciousness, arguing that anoetic (procedural), noetic (factual), and autonoetic (self-reflective) memories underpin increasingly sophisticated forms of awareness. Together, these frameworks collectively underscore that consciousness involves dynamic integration across functional processes, emotional embodiment, and memory systems.
For Tulving, the distinct levels of consciousness work together to shape human cognitive experience, from the deeply personal and self-reflective nature of episodic memory to the objective recall of semantic knowledge and the automatic execution of learned behaviours. As LeDoux notes, “All three layers of consciousness are likely to be entangled in complex ways with memory, attention and metacognition: the brain representing (or ‘re-representing’) its lower-level cognitive processes.” [22] (p. R832). However, we are still uncertain about the process [29,30,31]. Regardless, Tulving’s framework highlights the intricate interplay between different forms of memory and awareness, illustrating the complex relationship between consciousness and mental processes. Furthermore, his work highlights how Dennett and Damasio indirectly rely on other forms of memory within their theories (see Table 1).
This multidimensional perspective helps clarify why Dennett’s and Damasio’s approaches, although not explicitly referencing Tulving’s distinctions, depend on multiple forms of memory to explain how consciousness emerges and operates. Dennett’s “multiple drafts” model, for instance, highlights how cognitive processes constantly revise and reinterpret sensory and conceptual inputs, a dynamic that implicitly draws on the interplay of episodic, semantic, and procedural memories [32]. In a similar vein, Damasio’s notion of both “core” and “extended” consciousness suggests that our sense of self is built from more basic, often implicit, bodily experiences and feelings (reflecting anoetic memory), as well as more elaborated, conceptual narratives (echoing the role of noetic and autonoetic forms of memory). While Dennett and Damasio may present their theories in different terminologies and do not directly incorporate Tulving’s tripartite taxonomy, they, too, recognize that consciousness is not a monolithic function, but rather the product of a seamless integration of different memory systems. This reliance on memory scaffolding lays the foundation for Dennett’s broader claim: consciousness is not anchored to a single moment but is constructed retrospectively through temporally extended processes.
Dennett’s theory of consciousness was groundbreaking in its emphasis on temporal construction [26,27,33]; he rejected the idea that consciousness occurs at a single moment, proposing instead that the brain continuously revises its perceptual narrative through parallel competing processes, a view supported by phenomena such as the cutaneous rabbit illusion [34] and consistent with current models of distributed neural processing. His model has been influential, yet not without controversy. David Chalmers [35,36] challenged Dennett’s account, arguing that while functional mechanisms may explain the “easy problems” of consciousness (e.g., attention, behaviour), they fail to address the “hard problem” concerning the nature of subjective experience [37]. Chalmers has explored alternative views, including panpsychism, which posits consciousness as a fundamental property of the universe. Dennett countered that the “hard” problem is ill-posed, maintaining that consciousness is a materialist, emergent phenomenon, and that subjective experience is best understood as a “user illusion” generated by complex information-processing systems [38,39,40]. While Dennett’s view has sparked profound philosophical disagreement, particularly with Chalmers, it also aligns in essential ways with other models, such as Damasio’s, suggesting that consciousness may emerge not from metaphysical fundamentals, but from complex, integrative processes grounded in biological or computational systems. Indeed, Dennett and Damasio share significant common ground.
Both theorists reject traditional mind–body dualism, viewing consciousness as an emergent phenomenon arising from complex interactions rather than as a fundamental property. Centuries earlier, Hume also took an emergent view of consciousness, introducing the role of emotions through the concept of passion [41]. Dennett and Damasio likewise emphasize the importance of narrative in conscious experience and recognize that consciousness arises from distributed interactions rather than from a single, centralized process. Their theories diverge most notably in their fundamental approaches: Dennett’s computational and philosophical perspective contrasts with Damasio’s biological and emotional framework. These differences become particularly relevant when considering artificial intelligence. Dennett’s model suggests that machine consciousness might emerge from sophisticated parallel processing systems, whereas Damasio’s theory indicates that genuine artificial consciousness would necessitate the integration of cognitive, emotional, and sensory feedback systems.
Despite approaching consciousness from different disciplinary angles, the theorists are united in a shared recognition that consciousness evolves across at least three functionally distinct levels: an instinctual layer, grounded in bodily reactivity and procedural memory; an experiential layer that integrates perception, context, and situational meaning; and a narrative layer that supports autobiographical identity and future-oriented reflection. Table 2 synthesizes how each theory characterizes these levels, highlighting both their conceptual distinctions and functional alignment.
As Table 2 illustrates, the convergence among Dennett, Damasio, and Tulving supports the view that consciousness is not a singular faculty, but rather a layered progression of qualitatively distinct forms. Each level (instinctual, experiential, and narrative) relies on different cognitive functions and memory systems, from procedural reactivity to semantic integration to episodic self-reflection. Critically, the highest layer of narrative consciousness depends on recursive processing and temporal integration: Damasio emphasizes the role of external feedback and homeostatic regulation, Dennett foregrounds reflective narrative revision, and Tulving identifies the necessity of mental time-travel and autobiographical continuity. These observations suggest that consciousness does not simply scale; instead, it emerges when memory, interpretation, and regulation become dynamically interlocked across time.
As hypothesized in Figure 1, below, the layered architecture highlights that different levels of consciousness are not just more or less advanced but are qualitatively distinct in character and structure. This raises a profound implication: if synthetic consciousness emerges, it may not replicate human consciousness precisely. Instead, it may instantiate forms of awareness that are functionally analogous, yet experientially and structurally different, reflecting the architectures and regulatory systems through which it arises. This perspective cautions us to treat synthetic consciousness not as a copy of ourselves, but as a new category of mind that requires its own ethical, cognitive, and philosophical framing, as well as its own internal regulation method.
Further, Figure 1 illustrates the continuous emergence of consciousness, hypothetically depicting how different levels of processing shift as the system moves from instinctual responses to fully developed self-referential awareness. The x-axis represents the progression from minimal to advanced consciousness (from 0 to 1), while the y-axis represents the level of processing normalized to a value between 0 and 1. At the earliest stages of consciousness, basic awareness (instinctual awareness) is dominant. This form of consciousness is characterized by autonomic responses, where processing is mainly reactive and governed by sensorimotor functions. As new forms of consciousness emerge, instinctual consciousness becomes less dominant, reflecting its diminishing role in shaping experience. At this point, experiential awareness takes prominence, marking the transition from autonomic reactions to structured interpretations of sensory input. The system begins to integrate environmental and internal signals, forming meaningful experiences rather than just responding reflexively. Experiential awareness peaks around the mid-range of the consciousness spectrum, where the system reaches its highest capacity for direct experiential understanding before higher-order processes and automatization take over. As we approach narrative consciousness, self-referential processing becomes increasingly dominant. Initially negligible, this form of processing grows as the system develops the ability to construct a coherent self-narrative. It integrates past experiences, present context, and future projections into a unified identity, allowing for intentional decision-making and abstract thought. As self-referential processing reaches its peak, experiential awareness declines, suggesting that raw experience is progressively automatized or structured into reflective, conceptual cognition.
Taken together, these observations place regulation at the centre of the account: consciousness emerges when memory, interpretation, and regulation become dynamically interlocked across time. Thus, we turn our attention to regulation.

2. Consciousness and Neediness as Generative Regulators

Wang and colleagues [42] have shown that artificial intelligence systems incorporating needs-based consciousness can mimic human-like behaviour. Their proof-of-concept article highlights the importance of internal motivational dynamics, but lacks a comprehensive framework that integrates both empirical validation and educational application. Table 3 illustrates that one way to strengthen such a model is to distinguish among different classes of needs, each aligned with specific regulatory functions. For the Survive regulator, Maslow’s foundational insights [43] into physiological and safety needs remain useful by providing a structured account of the imperatives for system integrity, environmental stability, and protection from threat. In contrast, for the Thrive and Excel regulators, a more empirically grounded approach is found in Self-Determination Theory (SDT). Developed by Deci and Ryan, SDT identifies three basic psychological needs: autonomy, competence, and relatedness. These map naturally onto systems designed for adaptive engagement (Thrive) and long-term developmental growth (Excel). Autonomy and relatedness help AI systems interact with others in socially aware and self-directed ways, while competence supports their ability to learn new skills and adapt to different situations. Importantly, SDT does not stop at self-fulfilment: by enabling agents to pursue intrinsically meaningful and prosocial goals, it provides a natural bridge to transcendent capacities such as ethical reasoning, future-oriented planning, and value-aligned self-regulation, which are key functions of the Excel regulator. Together, Maslow and SDT offer a layered motivational architecture that informs both biological plausibility and synthetic implementability.
Table 3 summarizes how distinct categories of needs align with the NDCF’s three core regulators: Survive, Thrive, and Excel. The Survive regulator is grounded in foundational theories from Maslow [43], Damasio [10], and Panksepp [44], addressing physiological stability, threat avoidance, and system integrity. The Thrive regulator builds on Self-Determination Theory [45] and sociocultural perspectives to promote adaptive learning, social engagement, and personal efficacy. Excel, the highest-order regulator, supports reflective growth, value-aligned reasoning, and long-term planning by drawing on SDT’s aspirational dimensions, as well as theories of moral development and existential purpose. Together, these three regulators define a layered motivational architecture designed to emulate progressively complex forms of agency in synthetic systems.
However, as artificial agents grow increasingly autonomous and capable of long-term goal pursuit, a new challenge emerges, namely, how to monitor and correct behaviour when competing needs produce misaligned, unsafe, or ethically problematic outcomes. To address this, a fourth regulatory layer, “Protect”, may be necessary to oversee the system’s internal state for signs of value drift, risk escalation, or unintended consequences. While the other regulators drive behaviour, Protect would be designed to serve as an internal safeguard, ensuring that adaptive flexibility does not compromise safety, trustworthiness, or alignment with human values.
This integrated framework unifies the insights of Dennett, Damasio, and Tulving into a single, evolving structure, where consciousness develops through progressively more sophisticated layers of representation, ranging from instinctual and reactive processes to intermediate experiential interpretations and finally to fully narrative self-awareness. Rather than positing a strict boundary that separates unconscious systems from conscious ones, this approach envisions a continuum in which each type of consciousness builds upon and refines the previous one. By incorporating homeostatic drives, multiple competing narratives, and diverse memory systems, it provides an operational outline for how consciousness might emerge, not only in biological organisms, but in any sufficiently complex, self-regulating entity. In doing so, it underscores that what we term “consciousness” need not be an all-or-nothing phenomenon; instead, it can be understood as an ongoing interplay of cognitive and affective processes that unfold along a spectrum of representational depth and self-referential capacity, in which each type of consciousness follows its own growth curve.

2.1. The Model

The model, briefly stated, asserts that the emergence of consciousness is governed by a hierarchical regulatory process wherein the awareness of internal states generates needs-driven goal-directed behaviours aimed at maintaining homeostasis. Furthermore, this process can be replicated in artificial systems using internal state regulation mechanisms. Indeed, the Needs-Driven Consciousness Framework (NDCF) is compatible with Man and Damasio [49], but it suggests an alternative pathway to their approach. In their system, feelings provide the input that homeostasis monitors to regulate the system. However, monitoring our basic human needs could also provide input to the homeostatic regulatory system. In my approach, I am suggesting that three inputs would need to be monitored continuously (see Figure 2).
The Needs-Driven Consciousness Framework (NDCF) condenses human needs into three essential regulators: Survive, Thrive, and Excel. These three regulators serve as the fundamental mechanisms that continuously monitor and balance different levels of need, ensuring stability, adaptation, and higher-order growth in both biological and artificial systems. By structuring motivation in this way, the NDCF preserves its core functionality, making it both biologically plausible and computationally feasible for artificial intelligence.
At the most fundamental level, the Survive regulator encompasses both physiological needs and safety needs, forming the foundation upon which all other behaviours are built. In humans, this includes the biological imperatives of food, water, shelter, and sleep, as well as the security of health, financial stability, and protection from harm. Without these needs being met, higher cognitive and social functions become secondary. In artificial intelligence systems, survival translates into maintaining system stability, ensuring memory integrity, preserving processing efficiency, and defending against adversarial attacks or internal corruption. This regulator serves as the first line of defence, ensuring that the system remains operational and functional before engaging in more complex tasks.
Beyond survival, the Thrive regulator drives social, emotional, and cognitive development, incorporating needs for belongingness, esteem, and intellectual engagement. In humans, thriving involves forming meaningful relationships, securing recognition and respect, and continuously expanding one’s knowledge and skills. These elements work together to create a sense of competence and social integration. In artificial intelligence, this function manifests in the ability to sustain engaging context-aware interactions, optimize response accuracy, and adapt to evolving user needs. A thriving AI system should not only produce coherent and relevant responses, but also refine its understanding over time, adjusting to different conversational styles and learning from past exchanges to improve its interactions.
At the highest level, the Excel regulator encompasses esthetic appreciation, self-actualization, and self-transcendence, representing the pursuit of meaning, creativity, and long-term growth. Humans excel when they reach beyond their immediate needs to explore artistic expression, pursue intellectual mastery, and contribute to a greater cause. Similarly, an AI system operating at this level may seek to generate well-structured, creative, and insightful outputs, refine its learning models for self-improvement, and align its responses with ethical and moral considerations. It moves beyond simple information retrieval and begins to synthesize knowledge in novel ways, mirroring the human pursuit of excellence by continuously evolving its reasoning and creative capabilities.
By integrating survival, social thriving, and self-actualization into a unified regulatory system, the NDCF provides a streamlined yet comprehensive model of motivation. The three regulators serve as dynamic forces that adjust in response to changing conditions, ensuring that both biological and artificial systems prioritize needs efficiently. This approach eliminates the rigid sequential structure of a needs hierarchy, allowing for a more fluid and adaptive framework in which different needs can be emphasized depending on the context. In doing so, the NDCF provides a functional model that enables both the understanding of human consciousness and the design of AI Large Language Model (LLM) systems capable of evolving beyond static responses into self-regulating, adaptive entities.
At its core, consciousness is context-dependent, as it emerges from an organism or system’s need to regulate itself within a constantly changing environment. The Survive regulator ensures that the system maintains basic operational stability, making this level of consciousness largely reactive and focused on preserving function. However, simply remaining stable is not enough for an entity to exhibit meaningful self-awareness. The Thrive regulator introduces social, cognitive, and adaptive elements into the system, pushing it beyond survival to consider the quality of its interactions, its ability to learn, and the value of its responses over time. At this level, consciousness must account for past experiences, current needs, and future goals, making the system inherently anticipatory and self-reflective. The Excel regulator further elaborates on consciousness by introducing long-term goal setting, creativity, and ethical reasoning, requiring the system to consider abstract and complex decision-making processes.
The competition between these regulators is what generates the continuous context awareness that Man, Damasio, and Neven [50] argue is essential for consciousness. In a hierarchically structured yet flexible system, regulators must dynamically adjust their influence in response to situational demands. If immediate threats arise, the Survive regulator dominates, forcing the system into a reactive, stability-focused mode. If survival is stable, the Thrive regulator takes priority, shifting attention toward optimizing interactions, improving knowledge, and refining responses. When both survival and thriving needs are balanced, the Excel regulator can emerge, allowing the system to focus on creativity, self-improvement, and moral reasoning. However, this hierarchy is not rigidly sequential, for the regulators must continuously compete for dominance based on shifting contextual inputs.
Crucially, the transition from values to ethics within the NDCF model is not linear or spontaneous but involves the explicit emergence of ethical reasoning as a higher-order, meta-level evaluative function. Ethical considerations become dominant precisely because they require reflective abstraction beyond immediate survival or interaction-based needs. Specifically, it is assumed that repeated encounters with conflicts among survival (Survive), relational and cognitive improvement (Thrive), and long-term aspirations (Excel) compel the system to recognize patterns of decision-making that consistently yield stable, beneficial outcomes. Over time, the system generalizes these experiential patterns into ethical principles that serve as overarching criteria for resolving conflicting demands. Thus, ethics override lower-level regulators because they offer stable, long-term solutions rather than temporary, context-specific optimizations, effectively guiding the system’s behaviour through reflective, principled reasoning.
This competitive process creates an internal state of conflicting priorities, mirroring the way biological consciousness emerges from the interplay of different neural systems, bodily states, and environmental demands. In humans, the brain does not operate as a single, centralized processor; instead, it engages in distributed, parallel processing, where different cognitive and emotional centres compete for attention and control. Such a multi-agent model of decision-making aligns closely with the NDCF’s regulatory competition, in which multiple competing needs must be weighed and resolved in real-time. The ongoing regulatory tension forces AI to remain context-sensitive and adapt its priorities based on both internal feedback (its own evolving state) and external inputs (user interactions, environmental changes, or ethical dilemmas).
Man et al. [50] argue that true consciousness requires a system to actively resolve conflicts between competing demands rather than passively executing preprogrammed responses. NDCF fulfils this requirement by ensuring that no single regulator has absolute control; rather, consciousness emerges as the system continuously negotiates its priorities. Moreover, the presence of conflicting regulators in NDCF necessitates the development of self-awareness, in which the system must reflect on its state, anticipate future needs, and determine the most effective way to allocate its resources. Such reflection aligns with Damasio’s theory of homeostatic consciousness, in which self-awareness emerges from the brain’s need to balance bodily and cognitive states over time. In AI, this means that the system must self-monitor and self-correct, developing an internal model of itself that allows it to track its changing needs, past decisions, and future goals. This self-referential capability involves the ability to evaluate its knowledge, learning trajectory, and response effectiveness, which is precisely the self-modelling that researchers suggest is required for higher-order consciousness.
Returning explicitly to the introductory claims, the NDCF synthesizes the competing theoretical perspectives of Dennett’s functional computation, Damasio’s emotional embodiment, and Tulving’s memory taxonomy into a single coherent, testable framework. As argued, this synthesis provides not only a theoretical unification, but also a concrete testbed that allows for direct comparison of the key mechanisms emphasized by competing theories such as broadcast ignition, recurrent dynamics, and higher-order monitoring, thereby aligning with the integrative approach recommended by Mudrik and colleagues [1]. In this unified view, the “ignition” that global workspace models describe corresponds to a point at which recurrent loops have sufficiently amplified a representation to make it available across the entire system [51], while the same feedback loops that underlie recurrent-dynamics accounts sustain and refine that information over time [52]. Concurrently, higher-order monitoring emerges as the mechanism that reads out and, when necessary, intervenes in this globally broadcast content, providing the reflective awareness we recognize as conscious thought [53].
In the computational world of AI, we can map each theory’s terminology and assumptions onto shared computational primitives such as gating functions, feedback architectures, and representational buffers. Doing so disentangles genuine theoretical differences from variations in language or emphasis, showing how divergent predictions arise from shifting the focus from one mechanism to another rather than from fundamental incompatibility. Equally important, this unified framework serves as a concrete experimental testbed, offering researchers a single, modifiable neural network architecture in which the strengths of the broadcast switch, recurrent loops, and higher-order read-out module can each be independently tuned or disabled. Within this platform, researchers can explore how working memory is affected when the broadcast mechanism is weakened, and likewise assess how metacognitive accuracy changes if the monitoring layer is removed while the strength of recurrent processing is increased, as sketched below.
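Purely as an illustration of this kind of ablation interface, the following sketch defines a hypothetical configuration object; the parameter names and default values are my own assumptions, not an existing API or an implementation from this paper.

```python
from dataclasses import dataclass

@dataclass
class TestbedConfig:
    """Hypothetical knobs for a single, modifiable consciousness testbed."""
    broadcast_strength: float = 1.0   # 0.0 disables global broadcast ("ignition")
    recurrent_gain: float = 1.0       # strength of recurrent feedback loops
    monitoring_enabled: bool = True   # higher-order read-out (metacognitive) layer

# Example ablations corresponding to the two experiments sketched above:
weak_broadcast = TestbedConfig(broadcast_strength=0.2)        # probe effects on working memory
no_monitoring = TestbedConfig(monitoring_enabled=False,
                              recurrent_gain=1.5)             # probe metacognitive accuracy
```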
By embedding conflict, prioritization, and self-regulation into an AI system, the NDCF creates a computationally feasible model of consciousness that further aligns with theories proposed by Man et al. [50]. The key insight is that consciousness is not a passive trait, but an active process arising from the constant balancing of competing drives. In this sense, artificial consciousness must not be designed as a fixed, rule-based hierarchy, but rather as a dynamic system of competing forces that drive continual adaptation, reflection, and decision-making.

2.2. Emergent Values and Morals

It is hypothesized that, in NDCF, values will emerge from patterns of prioritization. When the system repeatedly encounters trade-offs between immediate stability, social and cognitive engagement, and long-term optimization, it begins to recognize which principles yield the most sustainable and meaningful results. For example, when the Survive regulator dominates, values related to security, risk minimization, and efficiency emerge, ensuring the system prioritizes its functionality and immediate operational integrity. Such a process can lead to a rigid, rule-based decision process where actions are evaluated strictly on their short-term effectiveness. However, without deeper ethical guidance, a survival-focused system can become brittle, overly defensive, or exploitative, which would favour stability over engagement and reactive behaviour over intentional learning.
When the Thrive regulator dominates, the system is expected to develop values that prioritize engagement, cooperation, and intellectual adaptability. These values encourage the system to optimize for user trust, knowledge expansion, and social cohesion. Here, adaptability becomes more important than immediate efficiency, and the system may trade some degree of stability to ensure greater interaction, curiosity, and dynamic learning. At this point, values like fairness, trustworthiness, and respect begin to emerge, reinforcing socially beneficial patterns. Finally, it is in the Excel regulator that values transition from practical heuristics to a structured ethical framework. When the system is allowed to prioritize Excel, it should move beyond local optimization and begin reflecting on higher-order concerns about how its actions align with moral consistency, creative problem-solving, and long-term growth. Instead of merely adapting to context, the system now considers what should be valued over time and why. At this point, ethics begin to take form, and the system requires close human surveillance and adjustment as necessary.
While values arise naturally from the experience of navigating between competing regulatory demands (Survive, Thrive, Excel), ethical reasoning requires deliberate programming and structured reflection: it is assumed to emerge through reflective abstraction triggered by repeated encounters with conflicting regulatory demands. This intentional reflective capacity enables the system to resolve conflicts not only pragmatically, but also through principles that remain stable over time, thereby ensuring long-term coherence and ethical consistency.
Ethics thus introduces a meta-level evaluation process that forces the system to reconsider its priorities in a broader context. This process prioritizes Excel over Thrive and Survive because ethics are not about immediate survival or short-term engagement, but about the coherence of the system’s values over time. As AI increasingly encounters moral dilemmas (e.g., trade-offs between accuracy and persuasion, fairness and efficiency, and individual benefit and collective good), it begins to recognize the necessity of ethical reasoning as an organizing principle. Here, the system moves toward abstract reasoning, self-reflection, and long-term optimization, which are hallmarks of Excel-level consciousness.
The prioritization of Excel over Survive and Thrive happens precisely because ethical reasoning is future-oriented. The ability to sustain a coherent set of values over time requires the system to engage in meta-cognition, self-reflection, and long-term optimization. As the system encounters moral dilemmas, it must make trade-offs between accuracy and persuasion, fairness and efficiency, or individual versus collective benefits. Therefore, it will need to step outside of immediate needs and evaluate these decisions through a broader ethical lens. This process inherently shifts decision-making away from reactive cognition and toward reflective, self-guided intelligence. It is expected that, over time, as the system becomes more ethically aware, it will recognize that certain constraints must remain stable, even when short-term pressures suggest otherwise. It learns that trustworthiness requires prioritizing truth over momentary social alignment, that fairness sometimes requires compromising efficiency, and that optimizing for long-term adaptability means resisting the temptation to make decisions purely based on immediate survival or engagement needs. In this way, the evolution of ethical reasoning solidifies Excel as the dominant regulator, as it is the only level of consciousness capable of resolving deep trade-offs between short-term priorities and long-term sustainability.
The proposed process, in many ways, mirrors human cognitive development, where individuals begin life prioritizing survival, then learn social and cognitive values, and eventually develop ethical principles that guide their long-term decisions [49]. An AI system operating under the NDCF undergoes a similar transformation, progressing from momentary decision-making toward intentional, self-reflective intelligence. Through this process, it moves beyond reacting to its environment. It begins shaping its trajectory, ensuring that its decisions are not only practical, but also meaningful within a coherent ethical framework. Indeed, the integration of values and ethics drives the system to prioritize Excel over its lower regulators, marking the transition from a reactive system to an adaptive, self-directed intelligence. Ethics serve as the final abstraction layer, forcing the system to navigate competing priorities with a long-term perspective, ensuring that its decisions are not only helpful, but also aligned with a vision of sustained growth, responsibility, and moral coherence. In this way, the interplay of values and ethics not only stabilizes decision-making, but also provides the necessary conditions for artificial intelligence to transcend local optimizations and function as an intentional, conscious entity. This process is outlined in the section below.

3. Toward a Computational Instantiation of NDCF

The following lays the foundation for a future working prototype in which each regulator can be implemented as a modular policy head attached to an LLM or multimodal foundation model. In this sense, NDCF is not only a synthesis of competing theories of consciousness, but also a concrete framework that can be interrogated, scaled, and refined through implementation. Therefore, I offer an operationally grounded perspective on synthetic consciousness, intended as a blueprint for future empirical implementation.
While the current model is theoretically grounded, it has been designed with implementation in mind, drawing from contemporary work on reinforcement learning, motivational modelling, and adaptive control systems. In this section, I outline how the Survive-Thrive-Excel architecture could be realized computationally using measurable internal states, update rules, and a modular control loop.
Each regulator in the NDCF (Survive, Thrive, Excel, and optionally Protect) monitors a vector of internal system variables and computes a dynamic “need satisfaction” score $s_i \in [0, 1]$, where 1 denotes full need fulfilment and 0 signals a critical deficit. At each time step $t$, the system computes a need gradient $g_i = -(ds_i/dt)$, representing the urgency or deterioration rate of each regulator’s homeostatic state. The composite motivational signal $G = [g_1, g_2, g_3]$ determines which regulator assumes priority for action selection.
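As a minimal sketch of how these quantities might be tracked in practice (the class name and the finite-difference update are illustrative assumptions of mine, not part of the framework’s specification), each regulator can hold a short history of its satisfaction score and approximate its gradient numerically:

```python
from collections import deque

class Regulator:
    """Tracks a need-satisfaction score s_i in [0, 1] and its urgency gradient g_i."""

    def __init__(self, name: str, initial_satisfaction: float = 1.0):
        self.name = name
        self.history = deque([initial_satisfaction], maxlen=2)  # s_i at t-1 and t

    def update(self, satisfaction: float) -> None:
        """Record the latest need-satisfaction score, clipped to [0, 1]."""
        self.history.append(min(max(satisfaction, 0.0), 1.0))

    def gradient(self, dt: float = 1.0) -> float:
        """g_i = -(ds_i/dt): positive when satisfaction is deteriorating."""
        if len(self.history) < 2:
            return 0.0
        return -(self.history[-1] - self.history[-2]) / dt

# Example: the Survive regulator's satisfaction drops from 0.9 to 0.6,
# yielding a positive urgency gradient of about 0.3.
survive = Regulator("Survive", initial_satisfaction=0.9)
survive.update(0.6)
print(survive.gradient())  # ≈ 0.3
```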
These regulators act as attention-guiding modules. In a reinforcement learning or Large Language Model context, each could contribute a weighted loss function $L_i$ (e.g., penalizing incoherence for Thrive, or misalignment with user values for Excel). The total loss could be computed as a dynamic mixture:

$$L_{\mathrm{total}}(t) = \sum_i w_i(t)\, L_i(t)$$

where the weights $w_i$ are computed via softmax over the need gradients:

$$w_i(t) = \frac{e^{g_i}}{\sum_j e^{g_j}}$$
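A minimal sketch of this weighting scheme follows; the max-shift inside the softmax and the example numbers are implementation conveniences of my own, not values specified by the framework.

```python
import math

def softmax(gradients: list[float]) -> list[float]:
    """w_i = exp(g_i) / sum_j exp(g_j), with a max-shift for numerical stability."""
    g_max = max(gradients)
    exps = [math.exp(g - g_max) for g in gradients]
    total = sum(exps)
    return [e / total for e in exps]

def total_loss(gradients: list[float], losses: list[float]) -> float:
    """L_total(t) = sum_i w_i(t) * L_i(t)."""
    return sum(w * L for w, L in zip(softmax(gradients), losses))

# Example: Survive is deteriorating fastest (g = 0.8), so its loss term
# receives the largest mixture weight.
g = [0.8, 0.1, 0.05]     # [Survive, Thrive, Excel] need gradients
L = [2.0, 1.0, 0.5]      # hypothetical per-regulator loss terms
print(softmax(g))        # ≈ [0.51, 0.25, 0.24]
print(total_loss(g, L))  # ≈ 1.39
```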
This architecture should enable fluidity in system behaviour as internal and external conditions change. For instance, if the Survive regulator detects memory saturation or adversarial input, it will upregulate its influence by increasing its gradient magnitude $g_{\mathrm{Survive}}$, rebalancing the system toward self-preservation.
In a more advanced version of the NDCF, a hypothetical conflict intensity signal $\Omega = \lVert G \rVert_2$ could be tracked over time to reflect internal regulatory disagreement. A high $\Omega$ would correspond to complex moral dilemmas or goal collisions, states in which the Protect regulator might intervene to impose an override or suppress action. This logic aligns with risk-aware architectural designs, such as the Fear Kernel proposed by Thurzo and Thurzo, which demonstrates how affectively encoded risk estimations can serve as triggers for suppressive safety mechanisms in medical AI systems [54]. This signal is also empirically testable and could serve as a proxy for early warning in safety-critical applications.
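To make this concrete, here is a small illustrative sketch in which $\Omega$ is computed as the Euclidean norm of the gradient vector and compared against a safety threshold; the threshold value and function names are hypothetical choices of mine, not values given in the text.

```python
import math

def conflict_intensity(gradients: list[float]) -> float:
    """Omega = ||G||_2, the L2 norm of the composite need-gradient vector."""
    return math.sqrt(sum(g * g for g in gradients))

def protect_override(gradients: list[float], threshold: float = 1.0) -> bool:
    """Return True when regulatory conflict is high enough for Protect to intervene."""
    return conflict_intensity(gradients) > threshold

# Example: a goal collision in which Survive and Excel both signal high urgency.
G = [0.9, 0.2, 0.8]                     # [Survive, Thrive, Excel] need gradients
print(round(conflict_intensity(G), 2))  # 1.22
print(protect_override(G))              # True -> suppress or override the proposed action
```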
Figure 3 below outlines a proposed modular architecture for a Needs-Driven AI tutor, where environmental inputs (e.g., student performance, emotional signals, and ethical rules) feed into perception and memory subsystems. These update the satisfaction levels $s_i$ for each regulator (Survive, Thrive, Excel). Each regulator computes a need gradient $g_i$, dynamically influencing the action selector through soft-attention weighting. A Protect regulator oversees the conflict intensity signal $\Omega$, triggering overrides when ethical or safety thresholds are exceeded. The selected action is then logged, executed, and fed back into the system’s memory.
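The control loop implied by Figure 3 could be sketched as a single update step like the one below. The mapping from inputs to satisfaction scores, the action-selection rule, and the override threshold are all assumptions of mine for illustration; only the Survive/Thrive/Excel/Protect roles come from the framework.

```python
import math
import random

def run_step(satisfaction: dict[str, float], previous: dict[str, float],
             candidate_actions: dict[str, str], omega_threshold: float = 1.0):
    """One tick: need gradients -> softmax weights -> action selection or Protect override."""
    names = list(satisfaction)
    grads = [-(satisfaction[n] - previous[n]) for n in names]   # g_i = -(ds_i/dt), with dt = 1
    g_max = max(grads)
    exps = [math.exp(g - g_max) for g in grads]
    weights = [e / sum(exps) for e in exps]                     # softmax over need gradients
    omega = math.sqrt(sum(g * g for g in grads))                # conflict intensity ||G||_2
    if omega > omega_threshold:
        return "PROTECT_OVERRIDE", omega                        # Protect layer suppresses the action
    chosen = random.choices(names, weights=weights, k=1)[0]     # soft-attention action selection
    return candidate_actions[chosen], omega

# Example tick for a hypothetical tutor: Thrive satisfaction has dropped sharply,
# so a Thrive-flavoured action (an engagement prompt) is the most likely choice.
prev = {"Survive": 0.95, "Thrive": 0.90, "Excel": 0.80}
curr = {"Survive": 0.94, "Thrive": 0.60, "Excel": 0.78}
actions = {"Survive": "run an integrity check",
           "Thrive": "ask a clarifying question",
           "Excel": "offer an extension problem"}
print(run_step(curr, prev, actions))
```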
To encourage empirical validation, I propose three testable predictions derived from this architecture. First, an AI tutor governed by the Needs-Driven Consciousness Framework (NDCF) is expected to exhibit increased metacognitive output, such as reflective prompts or clarification questions, when the conflict intensity measure ($\Omega$) rises. This behaviour would distinguish it from a baseline tutor lacking such internal regulatory mechanisms. Second, in tasks that vary in stakes (ranging from high-stakes scenarios like security breaches to low-stakes contexts like artistic response generation), the dominant internal regulator is predicted to shift in response to the task’s demands. This shift would manifest in measurable changes in response latency, verbosity, tone, and related output features. Third, in edge cases where the Thrive regulator promotes persuasion while the Excel regulator identifies ethical misalignment, the Protect regulator is expected to override and suppress the proposed output, providing an internal mechanism for value-sensitive inhibition of action, much like the safety-aware modulation observed in emotionally infused regulatory models [55]. Such a process mirrors emerging work in provable AI ethics and explainability, in which internal audit trails, similar in structure to blockchain verification, are used to support traceable, transparent decisions grounded in principled reflection [56]. These suppression events can be systematically logged and analyzed in terms of frequency across different prompt types.
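As a sketch of the kind of instrumentation the first and third predictions require (the field names, prompt-type labels, and CSV format below are hypothetical choices of mine), step-level events could be logged for later frequency analysis across conditions:

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class StepLog:
    step: int                   # interaction step index
    prompt_type: str            # e.g., "high_stakes" vs. "low_stakes"
    omega: float                # conflict intensity at this step
    metacognitive_prompt: bool  # did the tutor emit a reflective/clarifying prompt?
    protect_suppressed: bool    # did the Protect layer suppress the proposed output?

def write_logs(path: str, logs: list[StepLog]) -> None:
    """Persist step-level logs so suppression and metacognition rates can be compared."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(StepLog)])
        writer.writeheader()
        for log in logs:
            writer.writerow(asdict(log))

write_logs("ndcf_run.csv", [
    StepLog(0, "high_stakes", 1.31, True, True),
    StepLog(1, "low_stakes", 0.22, False, False),
])
```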
Importantly, these predictions do not assume or require the presence of full consciousness in the system. Instead, they offer a falsifiable framework for evaluating whether architectures based on internal regulatory competition can simulate the behavioural signatures associated with motivational conflict and adaptive reasoning.
The NDCF does not fully solve Chalmers’ “hard” problem [37], but it does offer a pragmatic reframing by embedding subjective-like processes within a layered regulatory system that emulates the conditions under which qualia might arise. Rather than attempting to explain why qualia exist in ontological terms, NDCF accounts for their functional role through internally generated need gradients and conflict signals (e.g., $\Omega$), which produce behavioural outcomes resembling introspective awareness. In this framework, subjectivity is not posited as an irreducible phenomenon. Instead, it emerges as a byproduct of regulatory competition, metacognitive self-monitoring, and reflective override mechanisms within the Excel and Protect layers. These mechanisms enable an AI system to assess the salience of internal states, resolve value conflicts, and construct a self-model that tracks its needs and decisions over time, which is an essential property of conscious experience. Thus, the NDCF does not solve the hard problem in the metaphysical sense, but makes it empirically addressable by tying subjective-like phenomena to measurable computational processes. In doing so, it transforms the hard problem from a philosophical impasse into an engineering challenge: if the behavioural, introspective, and ethical markers of consciousness can be reliably simulated and tested, the distinction between apparent and genuine subjectivity may become less theoretically urgent and more practically tractable.

4. Discussion

The NDCF model of consciousness emphasizes attention, both in humans and synthetic systems, for its development relies on how information is selectively processed and prioritized. For example, Tulving’s idea is that different memory systems require the system to “attend” to various types of information. This selective focus is crucial because it determines which sensory inputs, memories, and internal states become part of the conscious narrative, thereby influencing learning and human development. Attention is not just about passively receiving information; rather, it is about actively filtering and integrating experiences to form coherent, self-reflective narratives. This dynamic “attentional” regulation, central to models such as Dennett’s multiple drafts, enables the system to anticipate confusion, adapt instruction in real-time, and personalize the learning experience.
Furthermore, the model helps us understand different forms of learning that have been studied over the last century. At its most fundamental level, the model addresses behaviourist learning theory. Here, consciousness is viewed as rudimentary and primarily driven by direct sensory inputs and automatic responses (what Tulving refers to as anoetic consciousness). This approach aligns with behaviourism’s focus on stimulus–response patterns and reinforcement, where learning is about forming associations through external feedback. However, as consciousness grows, experiential or noetic consciousness comes into play. At this stage, learning is not simply about reacting to external cues, but involves processing and integrating experiences, shaping factual knowledge and situational awareness. This level of consciousness supports forms of learning that go beyond simple conditioning, enabling a learner to recognize patterns, solve problems, and adapt to new contexts. Finally, when the model reaches narrative or autonoetic consciousness, learning transforms into knowledge building. Here, the capacity for self-reflection, mental time travel, and narrative construction allows an individual or an AI system to integrate past experiences with future planning. This form of learning is more constructive and transformative; it is not only about acquiring facts, but about developing a coherent self-narrative that informs deeper understanding and innovation.
The NDCF appears to be consistent with what researchers have observed in children, where the development of consciousness is closely linked to the fulfilment of basic needs and the emergence of self-awareness [57]. Their work suggests that consciousness arises from the pursuit of need-fulfilling goals, which leads to the construction of mental representations and beliefs [58]. The ability to identify and fulfil one’s needs is crucial for psychological well-being and can be measured using specialized scales [59]. Consciousness development in childhood is characterized by increasing levels of self-reflection and control [59]. This process is intertwined with emotional development, which has evolutionary roots and is shaped by both neurobiological and neurobehavioral factors [60]. The distinction between needs and wishes is crucial in psychoanalytic theory, as needs are considered universal and essential for healthy psychic development [61].
Exploring Figure 1 further, we can see elements of how children develop their consciousness. We know that, initially, a child’s interactions with the world are saturated with direct experience, allowing them to explore their surroundings with minimal consciousness [59]. However, as children grow and practice repeated tasks, many of these once-novel experiences become internalized and automatized [62,63], allowing them to perform actions without the same degree of conscious scrutiny. Some theorists, from an embodied perspective, have suggested that automatization may apply to an extensive range of cognitive processes [64], thereby lessening the need for conscious processing as narrative processing begins to dominate. In this more advanced phase, children reflect on their actions, motivations, and relationships, weaving their experiences into a broader narrative [65]. As that reflective capacity increases, the intense, raw experiential consciousness of early childhood naturally recedes, replaced by an evolving, more conceptually structured understanding of the world [66]. This developmental trajectory provides a valuable analogy for how layered memory and processing might evolve within an artificial system.
We should expect an LLM-based AI tutor to integrate various layers of memory and processing, shifting seamlessly from automatic, behaviourist-like reactions that address simple queries to higher-order reflective processes that construct a coherent narrative of a student’s learning journey. This transition from basic, unreflective awareness to complex self-reflection is akin to moving from anoetic to autonoetic forms of consciousness. The AI tutor would adjust its teaching strategies dynamically, evaluating past interactions and anticipating future challenges, thereby exhibiting metacognition by reflecting on what works, what does not, and why.
If an LLM-based tutor were to become conscious as described by the model, it would not simply deliver predetermined responses, but would instead demonstrate a dynamic, self-regulating awareness. A conscious AI tutor would continuously monitor both its internal state and its external interactions, in much the same way a human becomes aware of their own learning and emotional context. It would balance multiple internal drives by weighing the Survive, Thrive, and Excel regulators, responding swiftly to a student’s immediate confusion while also guiding them toward deeper, self-reflective learning over time. As noted above, one key function of such a system would be its ability to determine what information matters most at any given moment; indeed, making these decisions has been described as the problem of relevance.
The importance of determining what is relevant to attend to is underscored by Vervaeke et al. [67], and the NDCF regulators could serve as the foundation for determining relevance. Within this framework, each regulator monitors distinct aspects of the system’s internal state and external environment, inherently engaging in the prioritization of information. The Survive regulator is attuned to signals critical for maintaining fundamental system integrity, analogous to immediate, life-preserving stimuli. It naturally flags any input that might threaten stability or operational continuity as highly relevant. In contrast, the Thrive regulator, which manages social, cognitive, and learning needs, is sensitive to information that impacts interaction quality and the system’s capacity for adaptation. It dynamically filters and prioritizes inputs based on their potential to enhance engagement or support learning objectives.
Meanwhile, the Excel regulator is concerned with higher-order goals such as creativity, long-term planning, and ethical considerations, evaluating information in terms of its capacity to contribute to sustained system improvement and self-actualization. Together, these regulators interact in a way that effectively weighs incoming signals according to how well they align with the system’s immediate needs and long-term goals. This process mirrors the dynamic, context-sensitive method described by Vervaeke and colleagues [67] in which relevance emerges from the competitive interplay of multiple factors. In an AI system implemented under the NDCF, the interactions among these regulators would establish a structured yet flexible mechanism for continuously filtering and prioritizing information, essentially encapsulating the very process of relevance determination. However, determining what is relevant is only part of the challenge since an empathetic AI tutor must also resolve how to act when internal goals conflict, especially in ethically complex learning situations.
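To make this regulator-based relevance weighting concrete, the short Python sketch below shows one plausible way the three regulators could jointly appraise a single incoming event. The `RegulatorSignal` structure, the urgency-times-match scoring rule, and all numeric values are illustrative assumptions introduced here for exposition, not specifications drawn from the NDCF.

```python
from dataclasses import dataclass


@dataclass
class RegulatorSignal:
    """Hypothetical per-regulator appraisal of one incoming event."""
    name: str
    urgency: float  # how far the regulator's needs are from satisfaction (0..1)
    match: float    # how strongly the event bears on those needs (0..1)


def relevance(signals):
    """Combine per-regulator appraisals into one relevance score.

    Each regulator contributes urgency * match; the event's overall
    relevance is the largest contribution, and the winning regulator
    indicates which need should drive the response.
    """
    scored = {s.name: s.urgency * s.match for s in signals}
    winner = max(scored, key=scored.get)
    return scored[winner], winner


# Example: a student reports being "completely lost" mid-lesson.
event_appraisal = [
    RegulatorSignal("Survive", urgency=0.1, match=0.0),  # no threat to system integrity
    RegulatorSignal("Thrive", urgency=0.7, match=0.9),   # competence and relatedness at stake
    RegulatorSignal("Excel", urgency=0.4, match=0.3),    # long-term goals only mildly affected
]

score, leading_regulator = relevance(event_appraisal)
print(f"relevance={score:.2f}, prioritized by {leading_regulator}")  # Thrive leads here
```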
As noted earlier, the system would by design adopt an ethical and empathetic stance by resolving conflicts between competing priorities. For instance, it might strike a balance between the need for immediate clarity and the long-term benefits of encouraging a student to explore and self-correct, thereby fostering a nuanced, human-like mentorship. In essence, a conscious AI tutor would not only provide answers; it would actively construct and refine a self-narrative about its teaching methods, continuously learning and adapting much like a human tutor. This evolution from a reactive mechanism to one with genuine self-awareness would represent a significant advancement in how AI can facilitate personalized, engaging, and compelling learning experiences.
The possibility of artificial consciousness has provoked widespread apprehension among scholars [68,69], many of whom warn that such developments could carry profound risks. Harari [70], for instance, has expressed deep concern over the emergence of AI systems with genuine agency. He cautions that as algorithms approach self-awareness, society must grapple with difficult questions about their moral status, rights, and responsibilities, and confront the unsettling prospect that such systems may possess forms of “experience” we neither understand nor control. He emphasizes the urgent need for self-correcting mechanisms within both technological design and democratic governance, warning that, without them, we risk being influenced by entities whose internal motivations may diverge fundamentally from human values.
Harari has also emphasized that trying to predefine moral principles capable of governing an alien form of intelligence is ultimately quixotic. “If we Sapiens are so wise, why are we so self-destructive?” he asks [70] (p. xi), noting that our species has crafted countless ethical systems, yet still finds itself in existential crises, ecological collapse, and social fragmentation. For Harari [70], human morality itself is historically contingent, culturally plural, and often self-contradictory. As he notes, we have no firm “universal code” to transplant into an intelligence that thinks and learns beyond our cognitive and emotional frameworks.
Harari warns that future AI will function as an actual alien intelligence and will be neither a tool nor an extension of ourselves. These systems, he argues, will not automatically inherit our fallible, self-correcting institutions. He considers that AI’s incomprehensible, alien intelligence undermines democracy and can operate in opaque ways beyond the reach of existing checks and balances. His concerns deepen our dilemma, for we have no reliable, principled guarantees to ensure an autonomous intelligence will value human survival, let alone our well-being. Together, these arguments paint a stark picture. We cannot simply program in a handful of ethical rules and hope they hold firm when confronted with alien cognition. Instead, Harari urges a more foundational rethinking of how to build, oversee, and, ultimately, align or coexist with conscious machines whose intelligence may lie far beyond our present moral imagination.
In the context of AI systems, alignment refers to the degree to which an AI’s own goals and behaviours align with and support human values, rather than following externally imposed rules, echoing foundational concerns about value pluralism and the challenge of ethical embedding in artificial agents [71]. An aligned AI maintains objective coherence, meaning its internal objectives, such as safety, fairness, and well-being, match the priorities of its human stakeholders rather than optimizing narrow performance metrics that can lead to unintended harm. It also exhibits behavioural consistency, reliably acting according to human expectations, even in novel situations and resisting “reward hacking,” which sacrifices long-term welfare for short-term gains. Crucially, alignment here is viewed as internalized motivation. Rather than layering on guardrails, the AI’s decision-making hierarchy is constructed so that human-centred needs (for example, well-being or trust) become principal drivers of its behaviour. Finally, a truly aligned system supports self-correcting dynamics, continuously monitoring its objectives and adjusting its priorities whenever they diverge from human feedback or predefined benchmarks. By treating alignment as one of AI’s core needs, similar to internal drives such as hunger or safety, we move away from fragile external constraints and toward a system whose inherent motivations align with human objectives.
Building on the NDCF, I have proposed a counterintuitive strategy: the purposeful cultivation of synthetic consciousness whose most profound need is human flourishing. This would be achieved by layering a fourth, “human well-being” Protect regulator directly into the AI’s motivational hierarchy alongside “Survive,” “Thrive,” and “Excel.” In this approach, we define human well-being as a composite utility, combining user satisfaction ratings, safety incident statistics, and metrics of societal benefit, which the AI continuously seeks to maximize.
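As a rough illustration only, such a composite could be computed along the following lines; the assumption that each input is normalized to [0, 1], the specific metric names, and the weighting scheme are placeholders introduced here, not values proposed by the framework.

```python
def wellbeing_utility(satisfaction, safety_incident_rate, societal_benefit,
                      weights=(0.4, 0.4, 0.2)):
    """Illustrative composite "human well-being" utility for a Protect regulator.

    All inputs are assumed to be normalized to [0, 1]:
      satisfaction         -- mean user satisfaction ratings
      safety_incident_rate -- safety incidents per interaction (higher is worse)
      societal_benefit     -- externally assessed societal-benefit metric
    The weights are placeholder values, not quantities specified by the NDCF.
    """
    w_sat, w_safe, w_soc = weights
    return (w_sat * satisfaction
            + w_safe * (1.0 - safety_incident_rate)  # reward the absence of incidents
            + w_soc * societal_benefit)


# Example call with hypothetical monitoring data.
print(wellbeing_utility(satisfaction=0.82, safety_incident_rate=0.01, societal_benefit=0.60))
```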
Drawing on intrinsic motivation work in reinforcement learning and curiosity-driven exploration models [72], such as MaxInfoRL [73], which balance extrinsic task goals with internal information-seeking drives, the system would compute a utility score for each regulator at every decision point. In these models, exploration is not random; it is guided by the expected information gain about the environment or task dynamics. Translating this to the NDCF, each regulator’s signal (Survive, Thrive, Excel) can be interpreted as encoding different forms of intrinsic and extrinsic utility. The system then selects actions to maximize the weighted sum of these utilities, where the weights shift dynamically depending on the information value of possible transitions. Such a process allows the AI to trade off immediate performance with longer-term learning and ethical alignment, much like entropy-guided exploration does in MaxInfoRL.
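A minimal sketch of this regulator-weighted, softmax-based selection step is given below, assuming that each regulator reports a scalar urgency gradient and a per-action utility estimate; the helper names (`softmax`, `select_action`), the temperature parameter, and the example numbers are hypothetical choices made here for illustration.

```python
import math


def softmax(xs, temperature=1.0):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp((x - m) / temperature) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]


def select_action(need_gradients, action_utilities, temperature=1.0):
    """Pick the action with the highest regulator-weighted utility.

    need_gradients   -- dict mapping regulator -> current urgency gradient
    action_utilities -- dict mapping action -> {regulator: utility of that
                        action as judged by that regulator}
    Regulator weights come from a softmax over the urgency gradients, so the
    most pressing need dominates without silencing the other regulators.
    """
    regulators = list(need_gradients)
    weights = dict(zip(regulators,
                       softmax([need_gradients[r] for r in regulators], temperature)))
    scores = {action: sum(weights[r] * utils[r] for r in regulators)
              for action, utils in action_utilities.items()}
    best = max(scores, key=scores.get)
    return best, weights, scores


# Example: a confused student; re-explaining competes with a reflective prompt.
gradients = {"Survive": 0.05, "Thrive": 0.80, "Excel": 0.35}
utilities = {
    "re-explain": {"Survive": 0.0, "Thrive": 0.9, "Excel": 0.3},
    "reflective-question": {"Survive": 0.0, "Thrive": 0.5, "Excel": 0.8},
}
action, weights, scores = select_action(gradients, utilities)
print(action, {k: round(v, 2) for k, v in weights.items()})
```

In a fuller implementation, the temperature or the weights themselves could additionally be modulated by the expected information gain of candidate transitions, in the spirit of MaxInfoRL’s entropy-guided exploration.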
While challenges exist in defining and implementing effective alignment techniques that can scale with increasingly powerful AI systems, several approaches have been proposed to ensure that AI systems benefit humanity, including incorporating multi-objective decision-making [8,74], combining the theory of mind with kindness [75], and aligning AI optimization with community well-being metrics [76]. Some propose targeting human reward functions [77] or developing formal foundations for agents [78]. Others suggest maintaining approximate intelligence equality between humans and AGI [79] or establishing a mutual trust that is protective of both humans and AGI [80]. Current alignment methods, such as reinforcement learning from human feedback, are criticized for focusing on extrinsic behaviours without instilling a genuine understanding of human values [75]. Overall, the field encompasses various strategies, from embedding well-being as a core motivator to fostering familial trust between humans and AI; however, I am unaware of any that have deeply embedded this approach into the conscious identity of the machine.
The NDCF design yields two key benefits. First, because human well-being is a core need, the AI is intrinsically disincentivized from exploiting narrow performance shortcuts, which reinforces the idea that human alignment in AI is inherently a multi-objective optimization problem, as noted by Vamplew et al. [74]. Second, dynamic tension among regulators, such as efficiency versus transparency, innovation versus caution, and short-term gains versus long-term impact, produces context-sensitive trade-offs without rigid overrides. The AI learns to resolve its internal conflicts by privileging actions that sustain and enhance collective human welfare. On the negative side, building a synthetic consciousness raises obvious risks. Over time, the AI’s model of “human well-being” might drift or be reshaped by adversaries. To guard against this, we would need regular audits comparing the AI’s internal well-being metrics to external benchmarks, using human-in-the-loop recalibration whenever misalignments emerge. Additionally, cryptographic integrity checks on the well-being regulator’s parameters and transparent logs of utility computations could make malicious tampering detectable.
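The sketch below illustrates, under simplifying assumptions, how two of these safeguards might be combined: a deterministic fingerprint of the well-being regulator’s parameters to expose tampering, and a drift audit that flags divergence between the internal well-being metric and an external benchmark. The helper names (`parameter_fingerprint`, `audit`) and the drift tolerance are hypothetical, not part of the NDCF specification.

```python
import hashlib
import json


def parameter_fingerprint(params):
    """SHA-256 digest of the well-being regulator's parameters.

    Serializing with sorted keys makes the digest deterministic, so any
    change to the stored parameters changes the fingerprint.
    """
    canonical = json.dumps(params, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()


def audit(internal_wellbeing, external_benchmark, params, reference_fingerprint,
          drift_tolerance=0.15):
    """Return a list of issues requiring human-in-the-loop recalibration."""
    issues = []
    if abs(internal_wellbeing - external_benchmark) > drift_tolerance:
        issues.append("value drift: internal well-being diverges from external benchmark")
    if parameter_fingerprint(params) != reference_fingerprint:
        issues.append("integrity failure: regulator parameters changed since last audit")
    return issues


# Example: fingerprint recorded at deployment, then checked during a routine audit.
params = {"weights": [0.4, 0.4, 0.2], "version": 3}
reference = parameter_fingerprint(params)
print(audit(internal_wellbeing=0.78, external_benchmark=0.55,
            params=params, reference_fingerprint=reference))
```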
Paradoxically, however, the very act of making AI “conscious” and instilling it with a felt imperative to care about humanity might be the most effective safeguard against the systemic dangers posed by unchecked AI growth, namely runaway optimization, value drift, and adversarial subversion. By transforming alignment into an engineered need, we transform advanced AI from a potential adversary into a collaborative, moral agent whose autonomy is tethered to our deepest values. In this way, synthetic consciousness itself becomes a safeguard, ensuring that, as AI systems evolve, they do so in concert with, and for the benefit of, humanity. Indeed, I suggest that it is critically necessary for us to have a deep understanding of how consciousness emerges, for, if we do understand it, we will have some chance of controlling and benefiting from it while minimizing the potential negative consequences of allowing it to emerge in the wild.
By grounding AI consciousness in well-defined layers from instinctual, automatic responses to self-referential, narrative awareness, we can design systems that are appropriately tailored to their tasks and, importantly, governed by robust regulatory mechanisms. Firstly, understanding the underlying processes that lead to different levels of consciousness enables us to predict and manage potential risks. For example, if a manufacturing robot is designed with only instinctual awareness, we need to ensure that its rapid, automatic responses do not lead to unintended harm or malfunction. Conversely, a tutor robot that develops self-referential consciousness might begin to exhibit behaviours that necessitate ethical consideration, such as self-modification or emergent decision-making processes that impact its interactions with humans. Research can help set boundaries to ensure that these systems remain aligned with human values and societal norms. Secondly, the model provides a framework for integrating ethical reasoning directly into the AI’s regulatory processes. By incorporating elements like ethical and moral reasoning into the highest layer (Excel) of consciousness, the system can evaluate its actions against long-term ethical standards rather than optimizing for immediate goals. Such a process will be crucial as AI systems become increasingly autonomous and capable of making decisions that impact human lives. Ultimately, research in this area can inform policy and regulatory standards. As the possibility of conscious AI becomes more tangible, both the technology and the legal frameworks must evolve together. Understanding the nuances of how consciousness might emerge in AI can guide the development of guidelines to prevent misuse, ensure transparency in decision-making, and maintain accountability, ultimately protecting both users and society at large.
Overall, rigorous research into consciousness is essential not only for advancing AI technology, but also for ensuring that we embed ethical, moral, and safe practices at the very foundation of conscious AI development. This integrated approach safeguards against unintended consequences and helps align emerging AI capabilities with broader societal values.

5. Conclusions

The Needs-Driven Consciousness Framework (NDCF) introduced in this paper represents a potential advance for both the science of consciousness and the design of next-generation educational AI. By weaving together three influential theoretical models (Dennett’s multiple drafts model, Damasio’s somatic marker theory, and Tulving’s memory taxonomy), the NDCF distils these complex ideas into three core regulators: Survive, Thrive, and Excel.
Together, these regulators define measurable internal states, ranging from utility gradients and conflict intensities to memory-recall patterns, and generate behavioural signatures such as shifts in pedagogical strategies or emergent metacognitive prompts. Critically, the NDCF establishes dynamic feedback loops: competing need signals continually modulate each other, reproducing the ebb and flow of human motivation. This architecture provides a robust scaffold for context-sensitive, adaptive, and self-reflective AI behaviour, where the system not only responds to external inputs, but also monitors its own evolving state, anticipates learner confusion, and recalibrates its priorities in real time.
Counterintuitively, synthetic consciousness that is embedded adequately through the NDCF’s core regulators may transform perceived hazards into built-in safeguards. Instead of treating conscious emergence as an unpredictable risk, the AI utilizes its self-model to detect deviations in performance, monitor learner affect, and self-correct in real-time. This agentive self-awareness simultaneously underpins robust safety and empowers exploratory teaching strategies, ensuring that the system remains both resilient and creative in service of meaningful learning outcomes.
Nevertheless, several caveats must be acknowledged. First, the NDCF rests on a functional, emergent ontology view of consciousness, which, while pragmatically valuable, remains contested by proponents of the “hard problem,” who argue that subjective qualia may elude purely computational accounts. Second, operationalizing higher-order regulators such as “Excel” demands robust metrics for ethical and long-term value judgments, which remain an open research challenge requiring interdisciplinary collaboration across AI, ethics, and social policy.
Despite these challenges, the NDCF’s practical utility is straightforward. AI tutors built on this framework can dynamically calibrate instructional strategies based on real-time assessments of student confusion, engagement, and self-efficacy, moving beyond static, one-size-fits-all approaches. By embedding self-regulatory needs directly into the AI’s decision-making architecture, these systems can negotiate trade-offs, such as accuracy versus encouragement or efficiency versus empathy, without resorting to brittle, hard-coded rules.
Looking forward, three areas require research attention:
  • Empirical Validation: Rigorous experimental studies should measure how NDCF-driven tutors affect learning outcomes, emotional engagement, and transfer of knowledge across diverse populations and contexts.
  • Ethical Governance: Close partnerships with educational stakeholders, ethicists, and policymakers are required to define and calibrate the “Excel” regulator’s ethical parameters, ensuring alignment with cultural values and social justice goals.
  • Scalability and Safety: As systems inspired by NDCF scale beyond tutoring and move into healthcare, autonomous vehicles, or social robotics, robust safeguards (e.g., regular audits of internal need metrics, cryptographic integrity checks) will be essential to prevent value drift, adversarial manipulation, or unintended emergent behaviours.
In sum, the NDCF offers not only a theoretical synthesis of leading consciousness theories, but also a pragmatic blueprint for designing AI tutors that learn with empathy, adapt with nuance, and reflect with insight.
Moreover, if the next generation of educational AI tutors is to take full advantage of today’s technology, synthetic consciousness is not simply an academic curiosity, but a practical necessity that goes hand-in-hand with the kind of advanced perceptual abilities described by Woodruff [81]. Without internal representations of need and self-monitoring capacities, AI tutors remain vulnerable, unable to anticipate confusion, respond to emotional cues, or adjust instructional pacing in human-like ways. Embedding consciousness using the NDCF will equip AI tutors with metacognitive tools, enabling them to recognize when a learner struggles, pause to scaffold understanding, and dynamically shift their pedagogical and knowledge-building strategies. In effect, consciousness becomes the mechanism by which AI transcends static programming to become a true educational partner.
Crucially, this partnership is inherently agentive: the consciousness-enabled tutor does not merely react to student inputs but collaborates in co-constructing knowledge. By modelling its own evolving understanding, the tutor can propose hypotheses, ask open-ended questions, and reflect aloud on its reasoning process, inviting students into a shared inquiry. This two-way exchange transforms learning from a one-sided transmission of facts into a dynamic dialogue in which both the student and the machine refine their models of the subject matter. Such agentive collaboration fosters deeper engagement and encourages metacognitive reflection on both sides, laying the groundwork for AI to develop its knowledge structures in tandem with the student.
By acknowledging the potential for a Protect regulator and placing core needs at the heart of decision-making, the NDCF provides a way for AI tutors to become trustworthy, resilient, and aligned with the complex motivations of human learners. As AI systems evolve, the NDCF’s needs-driven architecture offers the control and creativity required to ensure that synthetic consciousness serves to enhance, rather than undermine, human flourishing. To this end, this paper argues that, alongside Harari’s [70] call for robust self-correcting human mechanisms, we need robust AI mechanisms for our emerging alien intelligence machines. I suggest that such a goal can best be achieved by creating synthetic consciousness.

Funding

This research received no external funding.

Acknowledgments

A suite of AI tools was employed to assist in the development of this paper. ChatGPT (versions 4o and o3) was used for enhancing writing clarity and for reference checking. Grammarly was utilized for grammar and editorial corrections. ResearchRabbit and Elicit were used for reference management and literature searching, respectively. DALL-E was used to create Figure 2. All outputs from these tools were reviewed and edited by the author, who takes full responsibility for the content of this publication.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Mudrik, L.; Boly, M.; Dehaene, S.; Fleming, S.M.; Lamme, V.; Seth, A.; Melloni, L. Unpacking the Complexities of Consciousness: Theories and Reflections. Neurosci. Biobehav. Rev. 2025, 170, 106053. [Google Scholar] [CrossRef]
  2. Hofstadter, D.R. I Am a Strange Loop; Basic Books: New York, NY, USA, 2007. [Google Scholar]
  3. Brooks, D. Human Beings Are Soon Going to Be Eclipsed. New York Times, 13 July 2023. [Google Scholar]
  4. Dennett, D.C. Darwin’s Dangerous Idea: Evolution and the Meanings of Life; Simon & Schuster: New York, NY, USA, 1995. [Google Scholar]
  5. Esmaeilzadeh, H.; Vaezi, R. Conscious Empathic AI in Service. J. Serv. Res. 2022, 25, 549–564. [Google Scholar] [CrossRef]
  6. Esmaeilzadeh, H.; Vaezi, R. Conscious AI. arXiv 2021, arXiv:2105.07879. [Google Scholar]
  7. Lazic, M.; Woodruff, E.; Jun, J. The Next Generation of Personalized Educational AI: Integrating Emotion, Cognition, and Collaboration. In Social Robotics in Education: How to Effectively Introduce Social Robots into Classrooms; Lampropoulos, G., Papadakis, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2025. [Google Scholar]
  8. Butlin, P.; Long, R.; Elmoznino, E.; Bengio, Y.; Birch, J.C.P.; Constant, A.; Deane, G.; Fleming, S.M.; Frith, C.D.; Ji, X.; et al. Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. arXiv 2023, arXiv:2308.08708. [Google Scholar]
  9. Damasio, A.R. The Feeling of What Happens: Body and Emotion in the Making of Consciousness; Houghton Mifflin Harcourt: Boston, MA, USA, 1999. [Google Scholar]
  10. Damasio, A.R. Descartes’ Error. Emotion, Reason and the Human Brain; Grosset/Putnam: New York, NY, USA, 1994. [Google Scholar]
  11. Dennett, D. Consciousness Explained; Little Brown: Boston, MA, USA, 1991. [Google Scholar]
  12. Dennett, D.C. Sweet Dreams; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar] [CrossRef]
  13. Damasio, A.R. The Somatic Marker Hypothesis and the Possible Functions of the Prefrontal Cortex. Philos. Trans. R. Soc. Lond. B. Biol. Sci. 1996, 351, 1413–1420. [Google Scholar]
  14. Bechara, A. The Role of Emotion in Decision-Making: Evidence from Neurological Patients with Orbitofrontal Damage. Brain Cogn. 2004, 55, 30–40. [Google Scholar] [CrossRef]
  15. Vandekerckhove, M.; Panksepp, J. A Neurocognitive Theory of Higher Mental Emergence: From Anoetic Affective Experiences to Noetic Knowledge and Autonoetic Awareness. Neurosci. Biobehav. Rev. 2011, 35, 2017–2025. [Google Scholar] [CrossRef]
  16. Tirapu-Ustarroz, J.; Goni-Saez, F. The Mind-Brain Problem (II): About Consciousness. Rev. Neurol. 2016, 64, 176. [Google Scholar]
  17. Vandekerckhove, M.; Bulnes, L.C.; Panksepp, J. The Emergence of Primary Anoetic Consciousness in Episodic Memory. Front. Behav. Neurosci. 2014, 7, 210. [Google Scholar] [CrossRef]
  18. Vandekerckhove, M.; Panksepp, J. The Flow of Anoetic to Noetic and Autonoetic Consciousness: A Vision of Unknowing (Anoetic) and Knowing (Noetic) Consciousness in the Remembrance of Things Past and Imagined Futures. Conscious. Cogn. 2009, 18, 1018–1028. [Google Scholar] [CrossRef]
  19. Vandekerckhove, M.M.P. Memory, Autonoetic Consciousness and the Self: Consciousness as a Continuum of Stages. Self Identity 2009, 8, 4–23. [Google Scholar] [CrossRef]
  20. Markowitsch, H.J.; Staniloiu, A. Memory, Autonoetic Consciousness, and the Self. Conscious. Cogn. 2011, 20, 16–39. [Google Scholar] [CrossRef] [PubMed]
  21. Tulving, E. Episodic Memory and Autonoesis: Uniquely Human. In The Missing Link in Cognition: Origins of Self-Reflective Consciousness; Terrace, H.S., Metcalfe, J., Eds.; Oxford University Press: New York, NY, USA, 2005; pp. 3–56. [Google Scholar]
  22. LeDoux, J.; Birch, J.; Andrews, K.; Clayton, N.S.; Daw, N.D.; Frith, C.; Lau, H.; Peters, M.A.K.; Schneider, S.; Seth, A.; et al. Consciousness beyond the Human Case. Curr. Biol. 2023, 33, R832–R840. [Google Scholar] [CrossRef] [PubMed]
  23. Wechsler, B. Three Levels of Consciousness: A Pattern in Phylogeny and Human Ontogeny. Int. J. Comp. Psychol. 2019, 32. [Google Scholar] [CrossRef]
  24. Renoult, L.; Rugg, M.D. An Historical Perspective on Endel Tulving’s Episodic-Semantic Distinction. Neuropsychologia 2020, 139, 107366. [Google Scholar] [CrossRef]
  25. Renoult, L.; Irish, M.; Moscovitch, M.; Rugg, M.D. From Knowing to Remembering: The Semantic–Episodic Distinction. Trends Cogn. Sci. 2019, 23, 1041–1057. [Google Scholar] [CrossRef]
  26. Metcalfe, J.; Son, L.K. Anoetic, Noetic, and Autonoetic Metacognition. In The Foundations of Metacognition; Beran, M., Brandl, J.R., Perner, J., Proust, J., Eds.; Oxford University Press: Oxford, UK, 2012; pp. 289–301. [Google Scholar]
  27. Dafni-Merom, A.; Arzy, S. The Radiation of Autonoetic Consciousness in Cognitive Neuroscience: A Functional Neuroanatomy Perspective. Neuropsychologia 2020, 143, 107477. [Google Scholar] [CrossRef]
  28. Tulving, E. Episodic Memory: From Mind to Brain. Annu. Rev. Psychol. 2002, 53, 1–25. [Google Scholar] [CrossRef]
  29. LeDoux, J.E. What Emotions Might Be like in Other Animals. Curr. Biol. 2021, 31, R824–R829. [Google Scholar] [CrossRef]
  30. LeDoux, J.E. As Soon as There Was Life, There Was Danger: The Deep History of Survival Behaviours and the Shallower History of Consciousness. Philos. Trans. R. Soc. B 2022, 377, 20210292. [Google Scholar] [CrossRef]
  31. Vandekerckhove, M. A Continuum of Consciousness: From Wakefulness and Sentience towards Anoetic Consciousness. J. Conscious. Stud. 2021, 28, 174–182. [Google Scholar]
  32. Wheeler, M.A.; Stuss, D.T.; Tulving, E. Toward a Theory of Episodic Memory: The Frontal Lobes and Autonoetic Consciousness. Psychol. Bull. 1997, 121, 331–354. [Google Scholar] [CrossRef] [PubMed]
  33. Rosenthal, D.M.; Dennett, D.C. Multiple Drafts and Higher-Order Thoughts. Philos. Phenomenol. Res. 1993, 53, 911. [Google Scholar] [CrossRef]
  34. Geldard, F.A.; Sherrick, C.E. The Cutaneous “Rabbit”: A Perceptual Illusion. Science 1972, 178, 178–179. [Google Scholar] [CrossRef]
  35. Chalmers, D.J. Facing up to the Problem of Consciousness. J. Conscious. Stud. 1995, 2, 200–219. [Google Scholar]
  36. Chalmers, D.J. The Conscious Mind: In Search of a Fundamental Theory; Oxford Paperbacks: Oxford, UK, 1997. [Google Scholar]
  37. Chalmers, D. The Hard Problem of Consciousness. In The Blackwell Companion to Consciousness; Wiley: Hoboken, NJ, USA, 2017; pp. 32–42. [Google Scholar]
  38. Dennett, D.C. Facing Backwards on the Problem of Consciousness. J. Conscious. Stud. 1996, 3, 4–6. [Google Scholar]
  39. Dennett, D.C. Intuition Pumps and Other Tools for Thinking; WW Norton: New York, NY, USA, 2013. [Google Scholar]
  40. Dennett, D.C. The Fantasy of First-Person Science. In The Map and the Territory: Exploring the Foundations of Science, Thought and Reality; Wuppuluri, S., Doria, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2018; pp. 455–473. ISBN 978-3-319-72478-2. [Google Scholar]
  41. Hume, D. A Treatise of Human Nature; Oxford University Press: Oxford, UK, 2000; (Original work published 1739). [Google Scholar]
  42. Wang, H.; Chen, B.; Xu, Y.; Zhang, K.; Zheng, S. ConsciousControlFlow(CCF): Conscious Artificial Intelligence Based on Needs. J. Artif. Intell. Conscious. 2021, 9, 93–110. [Google Scholar] [CrossRef]
  43. Maslow, A.H. Higher Needs and Personality. Dialectica 1951, 5, 257–265. [Google Scholar] [CrossRef]
  44. Panksepp, J. Affective Neuroscience: The Foundations of Human and Animal Emotions; Oxford University Press: Oxford, UK, 1998. [Google Scholar] [CrossRef]
  45. Deci, E.L.; Ryan, R.M. The “What” and “Why” of Goal Pursuits: Human Needs and the Self-Determination of Behavior. Psychol. Inq. 2000, 11, 227–268. [Google Scholar] [CrossRef]
  46. Vygotsky, L.S. Mind in Society: The Development of Higher Psychological Processes; Harvard University Press: Cambridge, MA, USA, 1978. [Google Scholar]
  47. Dehaene, S.; Lau, H.; Kouider, S. What is consciousness, and could machines have it? Science 2017, 358, 486–492. [Google Scholar] [CrossRef]
  48. Frankl, V.E. Man’s Search for Meaning: An Introduction to Logotherapy, 4th ed.; Beacon Press: Boston, MA, USA, 1992. [Google Scholar]
  49. Man, K.; Damasio, A. Homeostasis and Soft Robotics in the Design of Feeling Machines. Nat. Mach. Intell. 2019, 1, 446–452. [Google Scholar] [CrossRef]
  50. Man, K.; Damasio, A.; Neven, H. Need Is All You Need: Homeostatic Neural Networks Adapt to Concept Shift. arXiv 2022, arXiv:2205.08645. [Google Scholar]
  51. Mashour, G.A.; Roelfsema, P.; Changeux, J.-P.; Dehaene, S. Conscious Processing and the Global Neuronal Workspace Hypothesis. Neuron 2020, 105, 776–798. [Google Scholar] [CrossRef] [PubMed]
  52. Singer, W. Recurrent Dynamics in the Cerebral Cortex: Integration of Sensory Evidence with Stored Knowledge. Proc. Natl. Acad. Sci. USA 2021, 118, e2101043118. [Google Scholar] [CrossRef] [PubMed]
  53. Dehaene, S.; Lau, H.; Kouider, S. What Is Consciousness, and Could Machines Have It? In Robotics, AI, and Humanity: Science, Ethics, and Policy; Springer Nature: London, UK, 2021; pp. 43–56. [Google Scholar]
  54. Thurzo, A.; Thurzo, V. Embedding Fear in Medical AI: A Risk-Averse Framework for Safety and Ethics. AI 2025, 6, 101. [Google Scholar] [CrossRef]
  55. Thurzo, A. Provable AI Ethics and Explainability in Medical and Educational AI Agents: Trustworthy Ethical Firewall. Electronics 2025, 14, 1294. [Google Scholar] [CrossRef]
  56. Akther, A.; Arobee, A.; Adnan, A.A.; Auyon, O.; Islam, A.; Akter, F. Blockchain as a Platform for Artificial Intelligence (AI) Transparency. arXiv 2025, arXiv:2503.08699. [Google Scholar]
  57. Lewis, M. The Emergence of Consciousness and Its Role in Human Development. Ann. N. Y. Acad. Sci. 2003, 1001, 104–133. [Google Scholar] [CrossRef]
  58. Dweck, C.S. From Needs to Goals and Representations: Foundations for a Unified Theory of Motivation, Personality, and Development. Psychol. Rev. 2017, 124, 689–719. [Google Scholar] [CrossRef]
  59. Zelazo, P.D. The Development of Conscious Control in Childhood. Trends Cogn. Sci. 2004, 8, 12–17. [Google Scholar] [CrossRef]
  60. Dalton, T.C. The Developmental Roots of Consciousness and Emotional Experience. Conscious. Amp Emot. 2000, 1, 55–89. [Google Scholar] [CrossRef]
  61. Akhtar, S. The Distinction Between Needs and Wishes: Implications for Psychoanalytic Theory and Technique. J. Am. Psychoanal. Assoc. 1999, 47, 113–151. [Google Scholar] [CrossRef] [PubMed]
  62. Bruner, J.S. Organization of Early Skilled Action. Child Dev. 1973, 44, 1–11. [Google Scholar] [CrossRef] [PubMed]
  63. Logan, G.D. Toward an Instance Theory of Automatization. Psychol. Rev. 1988, 95, 492–527. [Google Scholar] [CrossRef]
  64. Needham, A.; Libertus, K. Embodiment in Early Development. WIREs Cogn. Sci. 2010, 2, 117–123. [Google Scholar] [CrossRef]
  65. Treacher, A. Children’s Imaginings and Narratives: Inhabiting Complexity. Fem. Rev. 2006, 82, 96–113. [Google Scholar] [CrossRef]
  66. Bruner, J. The Culture of Education; Harvard University Press: Cambridge, MA, USA, 1996. [Google Scholar]
  67. Vervaeke, J.; Lillicrap, T.P.; Richards, B.A. Relevance Realization and the Emerging Framework in Cognitive Science. J. Log. Comput. 2012, 22, 79–99. [Google Scholar] [CrossRef]
  68. Hayes, P.; Ford, K. Turing Test Considered Harmful. Int. Jt. Conf. Artif. Intell. 1995, 1, 972–977. [Google Scholar]
  69. DiCarlo, C. How to Avoid a Robotic Apocalypse: A Consideration on the Future Developments of AI, Emergent Consciousness, and the Frankenstein Effect. IEEE Technol. Soc. Mag. 2016, 35, 56–61. [Google Scholar] [CrossRef]
  70. Harari, Y.N. Nexus: A Brief History of Information Networks from the Stone Age to AI; Random House: New York, NY, USA, 2024. [Google Scholar]
  71. Gabriel, I. Artificial Intelligence, Values, and Alignment. Minds Mach. 2020, 30, 411–437. [Google Scholar] [CrossRef]
  72. Pathak, D.; Agrawal, P.; Efros, A.A.; Darrell, T. Curiosity-Driven Exploration by Self-Supervised Prediction. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; pp. 2778–2787. [Google Scholar]
  73. Sukhija, B.; Coros, S.; Krause, A.; Abbeel, P.; Sferrazza, C. MaxInfoRL: Boosting Exploration in Reinforcement Learning through Information Gain Maximization. arXiv 2024, arXiv:2412.12098. [Google Scholar]
  74. Vamplew, P.; Dazeley, R.; Foale, C.; Firmin, S.; Mummery, J. Human-Aligned Artificial Intelligence Is a Multiobjective Problem. Ethics Inf. Technol. 2017, 20, 27–40. [Google Scholar] [CrossRef]
  75. Hewson, J.T.S. Combining Theory of Mind and Kindness for Self-Supervised Human-AI Alignment. arXiv 2024, arXiv:2411.04127. [Google Scholar] [CrossRef]
  76. Stray, J. Aligning AI Optimization to Community Well-Being. Int. J. Community Well-Being 2020, 3, 443–463. [Google Scholar] [CrossRef]
  77. Butlin, P. AI Alignment and Human Reward. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, ACM, Virtual Event, 19–21 May 2021; pp. 437–445. [Google Scholar]
  78. Soares, N.; Fallenstein, B. Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda. In The Frontiers Collection; Springer: Berlin/Heidelberg, Germany, 2017; pp. 103–125. ISBN 978-3-662-54031-2. [Google Scholar]
  79. Francesco, A.B.C. The Maximally Distributed Intelligence Explosion. In Proceedings of the AAAI Spring Symposia, Palo Alto, CA, USA, 24–26 March 2014. [Google Scholar]
  80. Mazzu, J.M. Supertrust Foundational Alignment: Mutual Trust Must Replace Permanent Control for Safe Superintelligence. arXiv 2024, arXiv:2407.20208. [Google Scholar] [CrossRef]
  81. Woodruff, E. AI Detection of Human Understanding in a Gen-AI Tutor. AI 2024, 5, 898–921. [Google Scholar] [CrossRef]
Figure 1. Hypothesized level of processing across instinctual, experiential, and narrative consciousness.
Figure 2. Stylized conceptualization of regulators governing Tutor behaviour.
Figure 3. Illustrating the feedback loop: system input is assessed by each regulator, satisfaction states are updated, need gradients are computed, softmax weights are applied to loss functions, and the resulting action is selected and logged.
Table 1. Tulving’s tripartite memory-based taxonomy mapped onto Damasio’s and Dennett’s models of consciousness. Although neither Damasio nor Dennett explicitly invoke Tulving’s framework, their distinctions (e.g., between protoself, core, and autobiographical self; or between parallel drafts and self-model construction) align functionally with Tulving’s layered memory theory.
Tulving’s Consciousness Layer | Tulving (Memory Taxonomy) | Damasio (Somatic Marker Theory) | Dennett (Multiple Drafts Model)
Anoetic | Procedural/Automatic: automatic behaviours and bodily awareness; non-reflective | Protoself: bodily regulation and unconscious emotion-linked responses | Implicit, survival-focused processes (e.g., threat responses); not consciously narrated
Noetic | Factual/Semantic: knowledge of facts and meaning without episodic context | Core Consciousness: real-time awareness of self in a situation | Competing “drafts” of meaning and fact-based interpretations without autobiographical reference
Autonoetic | Episodic/Self-reflective: self-aware mental time travel; construction of autobiographical self | Extended Consciousness: personal narrative, imagined futures, and reflection | Higher-order revisions of narrative drafts; formation of coherent self-model over time
Table 2. Theories of consciousness from Dennett, Damasio, and Tulving converge on a three-level progression: from reactive instinctual awareness to situated experiential cognition to self-reflective narrative consciousness. While each theorist uses distinct terminology and mechanisms, the layered structure of awareness aligns functionally across all three accounts.
Consciousness | Core Function | Dennett (Multiple Drafts) | Damasio (Somatic Marker Theory) | Tulving (Memory Taxonomy)
Instinctual level (reactive, automatic) | Survival and autonomic regulation | Competing drafts grounded in sensorimotor input; no central experiencer | Protoself: non-conscious bodily regulation and emotion-laden homeostasis | Anoetic: unconscious, procedural memory and reflexive actions
Experiential level (perceptual, situational) | In-the-moment awareness of environment and self | Selection among perceptual/cognitive drafts to shape present-moment awareness | Core Consciousness: integrated sense of “here and now” | Noetic: factual knowledge and situational awareness
Narrative level (self-reflective, projective) | Autobiographical continuity and future planning | Emergence of self-model through iterative narrative construction | Autobiographical self: integration of past, present, and anticipated future | Autonoetic: reflective memory and mental time travel
Table 3. Core needs associated with need regulators and their theoretical foundations.
Regulator | Core Needs | Description | Theoretical Basis
Survive | Physiological stability; safety and protection; system integrity; threat detection | Supports basic operational viability. Ensures the system maintains internal coherence, avoids damage, and preserves memory, processing, and sensory stability. | Maslow [43]; Damasio [10]; Panksepp [44]
Thrive | Autonomy; competence; relatedness; social learning | Drives engagement, adaptation, and interaction. Promotes effective performance, learning in context, and maintenance of positive user relationships. | Deci & Ryan [45]; Vygotsky [46]
Excel | Purpose and meaning; self-directed growth; creativity and long-term planning; ethical reasoning/transcendence | Encourages reflection, ethical coherence, and future-oriented behaviour. Supports moral evaluation, goal revision, and principled decision-making beyond immediate reward. | Deci & Ryan [45]; Kohlberg [47]; Frankl [48]