Article

Cognitive Systems and Artificial Consciousness: What It Is Like to Be a Bat Is Not the Point

by Javier Arévalo-Royo *, Juan-Ignacio Latorre-Biel and Francisco-Javier Flor-Montalvo
Department of Mechanical Engineering, Public University of Navarra, Av de Tarazona s/n, 31500 Tudela, Navarre, Spain
* Author to whom correspondence should be addressed.
Metrics 2025, 2(3), 11; https://doi.org/10.3390/metrics2030011
Submission received: 8 June 2025 / Revised: 9 July 2025 / Accepted: 14 July 2025 / Published: 17 July 2025

Abstract

A longstanding ambiguity surrounds the operationalization of consciousness in artificial systems, complicated by the philosophical and cultural weight of subjective experience. This work examines whether cognitive architectures may be designed to support a functionally explicit form of artificial consciousness, focusing not on the replication of phenomenology, but rather on measurable, technically realizable introspective mechanisms. Drawing on a critical review of foundational and contemporary literature, this study articulates a conceptual and methodological shift: from investigating the experiential perspective of agents (“what it is like to be a bat”) to analyzing the informational, self-regulatory, and adaptive structures that enable purposive behavior. The approach combines theoretical analysis with a comparative review of major cognitive architectures, evaluating their capacity to implement access consciousness and internal monitoring. Findings indicate that several state-of-the-art systems already display core features associated with functional consciousness—such as self-explanation, context-sensitive adaptation, and performance evaluation—without invoking subjective states. These results support the thesis that cognitive engineering may progress more effectively by focusing on operational definitions of consciousness that are amenable to implementation and empirical validation. In conclusion, this perspective enables the development of artificial agents capable of autonomous reasoning and self-assessment, grounded in technical clarity rather than speculative constructs.

1. Introduction

This theoretical study lays the foundations for addressing a practical gap in artificial intelligence (AI) research: the absence of clear, operational criteria for embedding functional consciousness mechanisms—such as self-monitoring and explainability—in artificial cognitive systems. By clarifying these criteria, it becomes possible to develop agents capable of supervising and adapting their behavior in real-world environments. Furthermore, defining these components enhances transparency and auditability, thereby strengthening public trust in autonomous technologies.
This study addresses the following research questions:
  • What operational criteria and functional modules can be defined for implementing access consciousness and introspective monitoring in artificial systems?
  • Which current cognitive architectures implement key features of artificial functional consciousness?
  • How are access consciousness and monitoring mechanisms currently applied in practical sectors?
  • How can access consciousness be operationalized and distinguished from phenomenal consciousness in artificial agents?
  • What conceptual and technical limitations affect the current formalization and empirical validation of functional artificial consciousness?
The paper proceeds as follows. Section 2 introduces the theoretical foundations and systematic review methods. Section 3 delivers the results, covering conceptual distinctions, a functional taxonomy, and the evaluation of cognitive architectures. Section 4 discusses engineering implications and applications, and Section 5 concludes with the main findings, limitations, and future directions.
As a foundational and encompassing work, Nagel [1], in his critique of materialist reductionism, constructs a philosophical argument that draws on conceptual illustrations and analytic categories such as subjective awareness, reductive explanation, and perspectival standpoint. Through this framework, he identifies structural boundaries that constrain the capacity of physicalist theories to engage meaningfully with consciousness. His central claim highlights a recurring failure in attempts at objective reduction: namely, their disregard for the inherently first-person, non-transferable nature of conscious states.
Nagel’s argument, rooted in Husserlian phenomenology and anticipating what Chalmers would later articulate as the “hard problem” [2], has left a lasting doctrinal imprint. It has implicitly constrained the range of possible technical-computational developments aimed at functionally modeling artificial consciousness (AC). While many agree that phenomenal consciousness, due to its irreducibility and resistance to operational formalization, remains beyond the reach of current computational models, this does not imply that efforts to construct systems with consciousness-related cognitive capacities should be abandoned.
As a consequence of this conceptual entanglement, the term “consciousness” has, in some contexts, drifted toward an ambiguous and occasionally sensationalized meaning, which has hindered the formulation of clear scientific and technical approaches within the domain of cognitive architectures (CA).
Addressing this impasse, Block [3] introduced a fundamental distinction between two types of consciousness. Phenomenal consciousness (P-consciousness) refers to the qualitative character of experience, or what it feels like to undergo mental states, as originally highlighted by Nagel. Access consciousness (A-consciousness), on the other hand, refers to the functional accessibility of mental content for use in reasoning, planning, action control, and verbal reporting, without invoking phenomenological assumptions.
This second type, access consciousness, is amenable to formal representation and computational implementation. Models developed by Baars [4], and earlier by Newell and Simon [5], have laid the groundwork for such approaches. In parallel, a further mode of consciousness must be considered—self-consciousness—which entails the system’s recognition of itself as a distinct entity. Additionally, monitoring or metacognitive consciousness involves a system’s capacity to represent and track its own internal states. This latter form, often discussed under the concept of higher-order thought, presupposes the existence of system states that take other internal states as their content [6].
Using this taxonomy as a starting point, the present inquiry considers whether contemporary artificial cognitive systems can integrate access consciousness mechanisms alongside monitoring functions comparable to metacognition. Such an integration could yield functional value without invoking subjective experience as a necessary condition.

2. Materials and Methods

This study is conducted within a framework that combines logical scrutiny of definitional structures, critical engagement with high-impact scientific literature, and the use of functional analogies drawn from the historical trajectory of cognitive technologies. The foundational works by Husserl, Nagel, Chalmers, Baars, Newell, and Simon constitute the established conceptual background that precedes any systematic literature review. These canonical sources are considered indispensable for framing the theoretical context and are not products of the database-driven search protocols detailed below.
The methodological process unfolds across three interconnected phases, each reinforcing the coherence of the final conclusions:
  • The first phase involves the construction of a theoretical foundation through the analysis of both conceptual and operational definitions of consciousness.
  • The second phase consists of a comparative technical analysis of cognitive architectures that feature mechanisms functionally associated with consciousness. This includes systematic literature searches conducted through SCOPUS and the Web of Science, particularly its SCIE index, in order to evaluate the relative impact and maturity of each proposed system within the broader research landscape.
  • The third phase addresses the formulation of a set of functional specifications intended to guide the implementation of access consciousness and introspective monitoring capacities in autonomous computational agents. This final stage also involves an evaluative review of cognitive systems that already exhibit, at least partially, the characteristics defined by these specifications, with the aim of identifying practical pathways toward integration.
This study employs a structured systematic review protocol guided by key PRISMA principles—namely transparent search strategies and explicit reporting of inclusion/exclusion criteria. We conducted searches in SCOPUS and Web of Science for publications since 2000 that describe cognitive architectures with access consciousness or monitoring mechanisms. A clear screening process was followed: title/abstract review, full-text assessment by two independent evaluators, and consensus resolution for discrepancies. We also mapped our data-extraction process to a descriptive coding framework.
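To make the screening bookkeeping described above concrete, the following is a minimal, hypothetical sketch in Python; the example query strings, field names, and the consensus rule are illustrative assumptions, not the exact protocol used in this review.

```python
# Hypothetical sketch of systematic-review bookkeeping (illustrative only).
from dataclasses import dataclass

SEARCH_QUERIES = {  # hypothetical query strings for SCOPUS / Web of Science
    "scopus": '("cognitive architecture" AND ("access consciousness" OR metacognition))',
    "wos":    'TS=("cognitive architecture" AND ("access consciousness" OR monitoring))',
}

@dataclass
class Record:
    title: str
    year: int
    include_title_abstract: bool | None = None       # stage 1: title/abstract screen
    include_full_text: bool | None = None            # stage 2: full-text assessment
    reviewer_votes: tuple[bool, bool] | None = None   # two independent evaluators

def include(record: Record) -> bool:
    """Apply the date filter and require agreement between both reviewers;
    disagreements would go to a consensus discussion rather than this rule."""
    if record.year < 2000 or record.reviewer_votes is None:
        return False
    a, b = record.reviewer_votes
    return a and b

print(include(Record("Example architecture paper", 2021, True, True, (True, True))))
```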

3. Results

The theoretical framework is integrated into this section because the principal outcomes of the study are conceptual rather than empirical. Since the contribution consists in developing and refining theoretical constructs, rather than reporting experimental or practical results, presenting these analyses as results reflects the nature of the research objectives.

3.1. Background

The systematic investigation of consciousness as a structured and intentional phenomenon finds one of its most coherent philosophical foundations in the transcendental phenomenology of Edmund Husserl. In his exploration of conscious experience, Husserl advances the idea that every act of consciousness is inherently directed toward something beyond itself, a property he defines as intentionality. This directedness, rather than being incidental, is treated as the essential feature of conscious states [7]. Yet, despite its descriptive depth, Husserl’s approach abstains from addressing how such categories might be implemented in concrete systems, leaving its influence on engineering domains minimal and largely theoretical in nature.
In contrast, the latter half of the twentieth century marks a decisive shift toward computational and representational models of cognition. From the 1950s and 1960s onward, the emergence of cognitive science as a multidisciplinary enterprise introduces a view of consciousness rooted in the symbolic processing of information. Alan Turing’s foundational work [8] establishes the conceptual groundwork for formalizing mental functions through computation. Building on this, Herbert Simon and Allen Newell [9] develop a paradigm in which the mind is understood as a physical-symbolic system, one that lends itself to algorithmic modelling. John McCarthy, in parallel, articulates a logic-based approach to reasoning, where mental operations are expressed through symbolic manipulation. This framework gives rise to what is now known as symbolic AI, where rational planning and internal representation are cast within the structure of formal, general-purpose languages [10].
At the same time, cognitive psychology gradually replaces behaviorist accounts with models that posit internal, information-bearing constructs such as memory, attention, and planning. Within this evolving landscape, Bernard Baars introduces the Global Workspace Theory [11], a model in which consciousness is conceived as a functional platform enabling the distribution of key information across various subsystems. This model not only provides a formal mechanism for coordinating cognition but also offers a structure that can, in principle, be implemented in artificial systems. It foreshadows later developments in AI and cognitive architectures aimed at capturing similar integrative dynamics.
These two traditions, the phenomenological and the computational, offer markedly different trajectories. The former provides a rich epistemological account of experience, though one that resists translation into technical practice. The latter, while limited in its reach regarding subjective experience, offers operational clarity and reproducibility. The present study aligns with the second tradition, grounded in the conviction that consciousness, when approached as a functional construct, can be decomposed into implementable specifications without invoking phenomenological assumptions [12].

3.2. Computational Cognitive Science

In their foundational contribution to both computational cognitive science and theoretical computer science, Newell and Simon argue in Computer Science as Empirical Inquiry: Symbols and Search [5] that computer science should be understood as an empirical discipline. Just as psychology investigates mental phenomena and biology explores living systems, computer science, they propose, is devoted to studying symbol-processing systems. Central to their thesis is the notion of the physical symbol system, defined as any structure capable of manipulating symbols according to formal rules in order to achieve specific goals. Within this framework, problem-solving is reconceptualized as heuristic search, and both human reasoning and machine intelligence are formalized as operations over structured symbolic state spaces. This perspective, later supported by Lenat, asserts that intelligent behavior—whether biological or artificial—can be functionally captured through symbolic manipulation and search algorithms [13].
Their article outlines three foundational claims:
  • Computer science is an empirical science concerned with the behavior of symbolic systems [14].
  • Computer programs, particularly those simulating cognition, serve as empirical theories and can be subjected to experimental validation [15].
  • The study of human thought and the design of artificial intelligence share a common formal and methodological core: the structured manipulation of symbols via search mechanisms [16].
These principles have exerted a profound influence on the symbolic paradigm in AI and on the formation of a computational theory of problem solving that spans both engineering and cognitive psychology. While symbolic AI, built upon rule systems, formal logic, and discrete representations, models the reflective and structured nature of human reasoning with high fidelity, subsymbolic AI draws on the strengths of continuous computation. It employs weight adjustments, vector-based representations, and non-symbolic numerical processes that resonate more directly with the mathematical substrate of digital hardware. Today, hybrid approaches that integrate both paradigms represent the most promising direction [17]. Still, just as aeronautical engineering no longer requires imitation of the flapping wings of bats to achieve flight, computational models of cognition are not bound to biologically mimic human reasoning in order to exhibit comparable or even superior performance in tasks involving intellectual function [18].

3.3. Access Consciousness vs. Phenomenal Consciousness

According to Block [3], access consciousness is defined by three core functional characteristics:
  • Inferential promiscuity: the information involved can be used across multiple reasoning tasks.
  • Availability for rational action: the content supports purposeful, goal-oriented behavior.
  • Availability for language: the states can be transmitted to communicative subsystems for reporting or articulation.
These features are not merely theoretical. In cognitive engineering, they correspond to specific implementable components, such as working memory modules, attentional filters, executive planning units, and language generation systems. In contrast, phenomenal consciousness does not lend itself to any such implementation. As Chalmers has observed [2], this dimension of experience remains the unresolved core of the so-called “hard problem” of consciousness. Its resistance to reduction into physico-functional terms renders it a valid philosophical topic but one with limited, if any, applicability in engineering contexts. For an intuitive analogy, consider a computer that reports its status—CPU load, memory usage, error logs—capabilities clearly aligned with access consciousness, yet it does not ‘experience’ anything. It can monitor, reason, adapt, and explain its state, but it does not ‘feel’ as a conscious being would—phenomenal consciousness is absent.
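The status-reporting analogy can be made concrete with a short sketch: a program that collects its own operational state and makes it available for reasoning and reporting, without any claim of experience. The component names are illustrative assumptions, not taken from any architecture discussed here.

```python
# Illustrative sketch (assumption, not from the paper): a system that reports
# and acts on its own status -- access-consciousness-style availability of
# internal content -- with no phenomenal experience involved.
import os
import shutil
from datetime import datetime, timezone

class StatusReporter:
    """Collects internal state and exposes it for inference and communication."""

    def __init__(self):
        self.error_log: list[str] = []

    def snapshot(self) -> dict:
        load1, _, _ = os.getloadavg()            # Unix-only; illustrative metric
        disk = shutil.disk_usage("/")
        return {
            "time": datetime.now(timezone.utc).isoformat(),
            "cpu_load_1min": load1,
            "disk_used_fraction": disk.used / disk.total,
            "recent_errors": self.error_log[-5:],
        }

    def report(self) -> str:
        s = self.snapshot()
        # The report is usable for reasoning and verbal articulation
        # (A-consciousness), but nothing here "feels" anything.
        return (f"[{s['time']}] load={s['cpu_load_1min']:.2f}, "
                f"disk={s['disk_used_fraction']:.0%}, "
                f"errors={len(s['recent_errors'])}")

if __name__ == "__main__":
    print(StatusReporter().report())
```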

3.4. Functional Characteristics of Access and Monitoring Consciousness

Based on the analysis conducted, access consciousness and monitoring consciousness can therefore be implemented using the following general functional modules (Table 1):
This type of architecture allows the system to carry out error detection, explain its own actions (XAI) [25], and adjust its strategies when needed. Subjective experience is not a prerequisite. What it does require are representational and control structures, which can be effectively provided by knowledge graphs as a suitable technological framework.
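As an illustration of how the six modules in Table 1 could map onto software components, the following is a minimal, hypothetical sketch; the class names and interfaces are assumptions for exposition, not the design of any system reviewed below.

```python
# Hypothetical mapping of Table 1 modules to plain Python components.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class SelectiveAttention:            # Module 1: information filtering
    threshold: float = 0.5
    def filter(self, observations: dict[str, float]) -> dict[str, float]:
        return {k: v for k, v in observations.items() if v >= self.threshold}

@dataclass
class WorkingMemory:                 # Module 2: temporary storage of representations
    capacity: int = 7
    items: list[Any] = field(default_factory=list)
    def store(self, item: Any) -> None:
        self.items = (self.items + [item])[-self.capacity:]

@dataclass
class IntrospectiveModel:            # Module 3: internal model of the agent's own state
    state: dict[str, Any] = field(default_factory=dict)
    def update(self, **kwargs: Any) -> None:
        self.state.update(kwargs)

class Reasoner:                      # Module 4: inference over acquired knowledge
    def decide(self, memory: WorkingMemory, goals: list[str]) -> str:
        return goals[0] if goals else "idle"

class ExecutionMonitor:              # Module 5: performance and consistency evaluation
    def evaluate(self, expected: Any, observed: Any) -> float:
        return 1.0 if expected == observed else 0.0

class Reporter:                      # Module 6: articulation of diagnostics and decisions
    def explain(self, decision: str, score: float) -> str:
        return f"chose '{decision}' (last-step consistency: {score:.1f})"
```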

3.5. Major Cognitive Architectures Exhibiting Characteristics of Consciousness

Table 2 lists the twenty most influential cognitive architectures currently active [26], as measured by the number of publications and citations indexed in SCOPUS over the past five years. Arranged in alphabetical order, the table specifies which of the previously discussed functionalities (labelled 1 through 6) are implemented in each system. The identification of these features is based on a review of open-access scientific publications and the official documentation available for each project.
The comparative review reveals that several cognitive architectures implement the full range of access consciousness modules, while others incorporate only a subset, such as working memory and action selection. Systems with comprehensive introspective mechanisms correspond more closely to the proposed framework of functional consciousness. The practical mapping of these modules is further clarified by a typical cognitive cycle: the agent filters sensory input, stores salient data, models its internal state, reasons to select actions, monitors execution, and generates explanatory reports. This sequence reflects the integrated and iterative nature of the functional components.
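The cognitive cycle just described can be sketched as a single loop. The toy environment, variable names, and the success check below are hypothetical and serve only to show how the six steps chain together.

```python
# Hedged sketch of the cognitive cycle: filter input, store salient data,
# model internal state, select an action, monitor execution, and report.
import random

def cognitive_cycle(observations: dict[str, float], goals: list[str],
                    memory: list[dict], internal_state: dict) -> str:
    # 1. Selective attention: keep only salient observations
    salient = {k: v for k, v in observations.items() if v >= 0.5}
    # 2. Working memory: store the filtered snapshot
    memory.append(salient)
    # 3. Introspective representation: update the self-model
    internal_state["last_input_size"] = len(salient)
    # 4. Reasoning / action selection: pursue the first satisfiable goal
    action = next((g for g in goals if g in salient), "explore")
    # 5. Execution monitoring: toy success check standing in for real feedback
    success = random.random() < 0.8
    internal_state["last_action_ok"] = success
    # 6. Reporting: explanatory trace of the decision
    return f"acted '{action}' on {list(salient)}; success={success}"

memory, state = [], {}
print(cognitive_cycle({"light": 0.9, "noise": 0.2}, ["light"], memory, state))
```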
Figure 1 presents a layered model of cognitive architecture, where each level encapsulates a fundamental set of functions. Starting from perception and data acquisition, information is progressively processed and analyzed, supporting higher-order reasoning and decision-making, and culminating in interaction, communication, and coordination. This modular scheme provides a conceptual foundation for structuring functional consciousness in artificial agents. The ongoing work focuses on developing its functional design using concrete technologies that enable the implementation of these modules in real-world systems.

4. Discussion

Access consciousness is already a functional component in certain advanced systems. Models of self-confidence, such as FaMSeC [47], show that internal states can be quantified, decisions can be assessed, and the system can produce operational introspective evaluations.
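In the same spirit, an operational introspective evaluation can be reduced to a numeric self-confidence score computed from quantified internal indicators. The sketch below is a hypothetical illustration of that idea, not the actual FaMSeC formulation; the indicator names, weights, and threshold are assumptions.

```python
# Hypothetical self-confidence scoring (illustrative, not FaMSeC itself).
def self_confidence(solver_quality: float, model_fit: float,
                    outcome_margin: float, weights=(0.4, 0.3, 0.3)) -> float:
    """Combine normalized internal indicators (each in [0, 1]) into one score."""
    indicators = (solver_quality, model_fit, outcome_margin)
    score = sum(w * x for w, x in zip(weights, indicators))
    return max(0.0, min(1.0, score))

# The agent can gate its own decisions on the introspective score, e.g.
# deferring to a human operator when confidence falls below a threshold.
if self_confidence(0.9, 0.7, 0.4) < 0.6:
    print("low confidence: request supervision")
else:
    print("confidence sufficient: proceed autonomously")
```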
Phenomenal consciousness, whether in the formulation offered by Chalmers [2] or in the quantum interpretation proposed by D’Ariano and Faggin [48], relies on ontological assumptions that lack direct functional translation. Similarly, Husserl’s phenomenology, despite its conceptual richness and focus on intentionality, does not provide formal mechanisms suitable for technical implementation. The point is not that phenomenal consciousness is invalid, but rather that its constructions remain largely theoretical [49], while in engineering contexts, what ultimately matters is the capacity to build something functional and applicable.
It is necessary to emphasize that the architectures analyzed here, while functionally simulating certain aspects of consciousness, do not possess subjective experience or genuine phenomenality. These systems operate through formal mechanisms of representation, monitoring, and adaptation, but lack any form of “feeling” or awareness. The distinction between implementing functional modules and experiencing conscious states remains absolute; the artificial agent is not sentient but merely executes predefined operations according to its design.

4.1. Science Does Not Require Absolute Epistemological Validity, and Engineering Does Not Imitate: It Transforms

As Popper observed, scientific practice is grounded in functional models whose value does not rest on absolute truth but on their capacity to generate predictions, enable control, and prove useful within specific domains and contexts [50].
In the realm of engineering, countless cases illustrate that its purpose is not to replicate nature but to transform it. Early attempts at flight involved ornithopters with flapping wings, inspired by the anatomy of birds. However, the development of modern aviation relied on propellers and turbines, setting aside the biological mechanism entirely. What mattered was not the imitation of “flapping” but the achievement of flight itself. In this shift, function took precedence over mimicry [51].
Griffin and Galambos showed that bats navigate by means of acoustic echolocation. When Watson-Watt developed radar in the 1930s, the technology relied not on ultrasound but on electromagnetic microwaves. It does not simulate the bat’s perceptual experience. Yet it enables spatial orientation far beyond the bat’s natural range. The goal was not to recreate what the bat feels, but to build a system that works—and in some respects, works better [52].

4.2. Prejudices and Biases Concerning the Perception of Consciousness in AI

The current understanding of artificial consciousness (AC) remains closely tied to a cultural imaginary that, over centuries, has projected deep-seated fears and ethical uncertainties onto human-like creations. These representations, drawn from mythology and literature, have contributed to an ambivalent and often negative perception of AI when linked to the notion of consciousness. From the Golem to HAL 9000, the idea of an artificially conscious being has carried symbolic weight, distorting the term “consciousness” and transforming it from a technical construct into a cultural taboo [53]. In contrast, access consciousness provides a concept that is operational, quantifiable, and technically definable.
It is important to recognize that such narratives, despite their cultural impact, are symbolic in nature and do not necessarily reflect the current state of scientific and technological development. Today, cognitive engineering is oriented toward the construction of systems designed for functional effectiveness and operational utility, not for the replication of human inner experience. Addressing these long-standing biases requires a clear distinction between metaphorical discourse and the actual capabilities offered by present-day technologies. Doing so encourages a more objective and informed perspective on what AC can realistically achieve [54].

4.3. Approach to the Implementation of AI Consciousness

Contemporary cognitive engineering is making steady progress in incorporating access consciousness and introspective monitoring capabilities into artificial systems. These efforts concentrate on building functional mechanisms that support autonomy and adaptability, without attempting to emulate human subjective experience [55]. The focus remains on operational effectiveness, grounded in architectures that allow the system to process internal and external information in a context-sensitive manner.
Such developments indicate that the functional dimensions of consciousness can be formalized and implemented, provided that the emphasis remains on usability and performance rather than phenomenology. In this context, the construct of access consciousness can be expressed as the operator $C_A$ which, at each instant $t$, extracts the informational subset effectively usable by the system from its sensory streams, working memory, introspective state, and active goals:
$$C_A(t) = \Pi\big(O_t, M_t, S_t, G_t\big) = \{\, x \in \Omega_t \mid \sigma(x, \theta_t) \geq \tau \,\}$$
  • $O_t$: vector of sensory observations at time $t$.
  • $M_t$: contents of working memory.
  • $S_t$: agent’s introspective state.
  • $G_t$: set of active goals.
  • $\Omega_t$: domain of available representations.
  • $\sigma(x, \theta_t)$: attention function parameterised by $\theta_t$.
  • $\tau$: minimal accessibility threshold.
  • $\Pi$: projection operator that filters and normalises the informational space.
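A direct numerical reading of $C_A$ is sketched below; the specific attention function $\sigma$ and the goal-weighted scoring are illustrative choices, not prescribed by the definition above.

```python
# Hedged sketch of the access-consciousness operator C_A defined above.
def sigma(x: str, theta: dict[str, float]) -> float:
    """Attention function parameterised by theta: relevance weight for item x."""
    return theta.get(x, 0.0)

def C_A(O_t: dict[str, float], M_t: list[str], S_t: dict, G_t: set[str],
        theta_t: dict[str, float], tau: float = 0.5) -> set[str]:
    # Omega_t: the domain of currently available representations
    Omega_t = set(O_t) | set(M_t) | set(S_t) | G_t
    # Pi: project onto the subset whose attention score clears the threshold tau
    return {x for x in Omega_t if sigma(x, theta_t) >= tau}

accessible = C_A(O_t={"obstacle": 0.9}, M_t=["route"], S_t={"battery": 0.3},
                 G_t={"reach_dock"}, theta_t={"obstacle": 0.8, "reach_dock": 0.7})
print(accessible)   # e.g. {'obstacle', 'reach_dock'}
```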
Conversely, monitoring consciousness can be modelled by the operator $C_M$, which is responsible for evaluating the congruence between the actual and predicted internal states, producing the adjustment signal $\phi_t$ that feeds back into the architecture:
$$C_M(t) = \kappa\big(S_t, \hat{S}_t, a_t, E_t\big) = \phi_t = \Gamma\big(\hat{S}_t - S_t, E_t\big)$$
  • $S_t$: observed internal state.
  • $\hat{S}_t$: predicted internal state after executing action $a_t$.
  • $a_t$: action performed at time $t$.
  • $E_t$: exogenous feedback signal (e.g., success, error, reward).
  • $\Gamma$: control operator that transforms the discrepancy $\hat{S}_t - S_t$ and the signal $E_t$ into the corrective adjustment $\phi_t$.
  • $\kappa$: functional composition integrating prediction, measurement, and evaluation.
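A corresponding sketch of $C_M$ follows, with states represented as dictionaries of scalar variables. The proportional form chosen for $\Gamma$ is an illustrative assumption; any control operator mapping discrepancy and feedback to an adjustment would fit the definition.

```python
# Hedged numerical sketch of the monitoring operator C_M defined above.
def Gamma(discrepancy: dict[str, float], E_t: float, gain: float = 0.5) -> dict[str, float]:
    """Turn prediction error and exogenous feedback into an adjustment signal."""
    return {k: gain * d * (1.0 + E_t) for k, d in discrepancy.items()}

def C_M(S_t: dict[str, float], S_hat_t: dict[str, float],
        a_t: str, E_t: float) -> dict[str, float]:
    # Discrepancy between predicted and observed internal state after action a_t
    discrepancy = {k: S_hat_t[k] - S_t[k] for k in S_t}
    phi_t = Gamma(discrepancy, E_t)
    return phi_t   # fed back into the architecture as a corrective adjustment

phi = C_M(S_t={"battery": 0.30}, S_hat_t={"battery": 0.45},
          a_t="navigate", E_t=-0.2)   # negative feedback: worse than expected
print(phi)   # {'battery': 0.06}
```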

4.4. Taxonomy of Applications for Cognitive Systems Endowed with Artificial Access and Monitoring Consciousness

Artificial consciousness (AC) has moved beyond the realm of abstract theory and speculation, finding practical expression across a variety of applied domains. These real-world implementations show that it is possible to embed advanced cognitive functions in artificial systems without invoking phenomenal experience. Five primary sectors can be identified where this integration is already underway:
  • Health Sciences: applications include AI-supported diagnostic tools, automated clinical documentation, intelligent monitoring during recovery, emotional interaction in mental health contexts, and personalized assistance in geriatric care [56].
  • Industry and Manufacturing: systems are being deployed for predictive maintenance, optimization of production workflows, efficient management of resources, automated quality inspection, real-time supervision of assembly lines, and route planning for logistics [57].
  • Education and Training: examples include adaptive tutoring platforms, detection of attention and emotional states in learning environments, dynamic content adjustment, personalized evaluation methods, and tailored support for students with specific educational needs [58].
  • Security and Defense: implementations range from surveillance systems based on behavioral analysis to cyber threat detection, predictive risk modelling, intelligent access control, and automated emergency response coordination [59].
  • Environment and Sustainability: systems are used to monitor air and water conditions, manage natural resources, detect natural hazards in advance, enhance energy use, and observe ecological systems [60].
While this study centers on the conceptual and architectural aspects of functional consciousness, it is acknowledged that its practical implementation entails ethical and societal risks, particularly concerning transparency and responsibility. Furthermore, the functionalist approach itself faces important theoretical limitations, especially regarding its inability to address subjective experience. Alternative frameworks, such as phenomenological and enactivist theories, contend that conscious experience cannot be fully captured by operational modules alone. Recognizing these dimensions is crucial for any comprehensive discussion, although their detailed analysis exceeds the current scope.

5. Conclusions

Cognitive engineering does not require an understanding of what it feels like to be a bat. Its primary goal is not to replicate experiential states but rather to design systems capable of perceiving, reasoning, monitoring, and making autonomous decisions effectively and with measurable outcomes. Access consciousness, defined as a functional structure that makes mental content available for inference, control, and planning, serves precisely this purpose. It endows artificial systems with introspective, supervisory, and adaptive capabilities, avoiding reliance on phenomenological assumptions.
Such a perspective inevitably prompts ethical considerations—questions arising from the attribution of moral status or the assignment of responsibilities within sociocultural contexts. While these reflections are undoubtedly significant, they more appropriately belong to the realm of conceptual analysis than to technical or empirical research. Consistent with Wittgenstein’s assertion that “ethics and aesthetics are one and the same,” and that “ethics is transcendental,” both dimensions lie outside the scope of scientific inquiry. Therefore, this discussion deliberately refrains from normative judgments, maintaining its focus on formal and atemporal mechanisms that might underpin artificial consciousness. Broader implications regarding values and meanings accompanying progress in this domain are thus deferred for future reflection and debate.
Nonetheless, it would be methodologically restrictive to disregard all efforts aimed at operationalizing phenomenal consciousness. Recent developments, such as OpenCog Hyperon, explore—in a structured and explicitly non-metaphysical way—aspects of internal organization reminiscent of self-experiential states. Hierarchical attention mechanisms, self-referential components, and dynamic semantic regulatory systems suggest the possibility of conceiving conscious experience not as an inaccessible subjective property, but as a relational function tied to internal coherence, potentially implementable within complex architectures.
Currently, however, the most promising and applicable results derive from systems integrating access consciousness with metacognitive monitoring. Such systems evaluate their own performance, adapt strategies, and justify actions. From an engineering standpoint, functional consciousness already meets the essential criteria of measurability, traceability, and control. Thus, the fundamental question shifts from “what it is like to be a bat” toward “what being a bat functionally entails”: identifying mechanisms for effective orientation, internal representation of environments, filtering and prioritizing sensory inputs, and behavioral adaptation. While the philosophical exploration of phenomenal consciousness retains its validity, practical advances depend on artificial consciousness capable of accessing, evaluating, and acting, thereby redefining the contemporary frontier of AI research.

6. Future Research Directions

The framework developed in this study provides a systematic foundation for analyzing functional consciousness in artificial agents; however, several critical areas merit further investigation. Future research should explicitly address the limitations imposed by current technological capabilities and prevailing conceptual models, pushing beyond theoretical boundaries toward direct empirical validation. Among the essential unresolved challenges are assessing the scalability of metacognitive mechanisms, elucidating the emergence of adaptive behaviors in unstructured environments, and establishing transparent, rigorous criteria for evaluating artificial introspection. Addressing these aspects will help bridge the gap between formal theoretical models and practical implementations, thus clarifying both the epistemic boundaries and practical potentials of artificial consciousness.

Author Contributions

Conceptualization, J.A.-R., J.-I.L.-B. and F.-J.F.-M.; methodology, J.A.-R.; software, J.A.-R.; validation, J.-I.L.-B. and F.-J.F.-M.; formal analysis, J.A.-R.; investigation, J.A.-R.; resources, J.A.-R.; data curation, J.A.-R.; writing—original draft preparation, J.A.-R.; writing—review and editing, J.A.-R., J.-I.L.-B. and F.-J.F.-M.; visualization, J.A.-R.; supervision, J.-I.L.-B. and F.-J.F.-M.; project administration, J.A.-R., J.-I.L.-B. and F.-J.F.-M.; funding acquisition: not applicable. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nagel, T. What Is It Like to Be a Bat? Philos. Rev. 1974, 83, 435–450. [Google Scholar] [CrossRef]
  2. Chalmers, D.J. Facing Up to the Problem of Consciousness. In The Character of Consciousness; Oxford University Press: Oxford, UK, 2010; pp. 3–34. Available online: https://academic.oup.com/book/6996/chapter/151305365 (accessed on 10 May 2025).
  3. Block, N. On a Confusion about a Function of Consciousness. Behav. Brain Sci. 1995, 18, 227–247. [Google Scholar] [CrossRef]
  4. Baars, B.J. A Cognitive Theory of Consciousness; Cambridge University Press: Cambridge, UK, 1993; ISBN 0521427436/978 0521427432. [Google Scholar]
  5. Newell, A.; Simon, H.A. Computer Science as Empirical Inquiry: Symbols and Search. Commun. ACM 1976, 19, 113–126. [Google Scholar] [CrossRef]
  6. Brown, R.; Lau, H.; LeDoux, J.E. Understanding the Higher-Order Approach to Consciousness. Trends Cogn. Sci. 2019, 23, 754–768. [Google Scholar] [CrossRef]
  7. Demmin, H.S. A Phenomenological Theory of Occurrent Thought and Husserl’s Intentionality. Husserl Stud. 2025, 41, 197–220. [Google Scholar] [CrossRef]
  8. Turing, A.M. Computing Machinery and Intelligence. Mind 1950, LIX, 433–460. [Google Scholar] [CrossRef]
  9. Simon, H.A.; Newell, A. Human Problem Solving: The State of the Theory in 1970. Am. Psychol. 1971, 26, 145–159. [Google Scholar] [CrossRef]
  10. McCarthy, J. Artificial Intelligence, Logic and Formalizing Common Sense. In Philosophical Logic and Artificial Intelligence; Springer: Dordrecht, The Netherlands, 1989; pp. 161–190. Available online: http://link.springer.com/10.1007/978-94-009-2448-2_6 (accessed on 12 May 2025).
  11. Baars, B.J. In the Theater of Consciousness: The Workspace of the Mind; Oxford University Press: Oxford, UK, 1997; ISBN 9780195102659. [Google Scholar]
  12. Ding, Z.; Wei, X.; Xu, Y. Survey of Consciousness Theory from Computational Perspective. arXiv 2023, arXiv:2309.10063. [Google Scholar] [CrossRef]
  13. Lenat, D.B. Theory Formation by Heuristic Search: The Nature of Heuristics II: Background and Examples. Artif. Intell. 1983, 21, 31–59. [Google Scholar] [CrossRef]
  14. Arévalo-Royo, J. Computer Science: The World as Information and Representation; Amazon Digital Services LLC—KDP: Seattle, WA, USA, 2024; ISBN 9798340452597. [Google Scholar]
  15. Anderson, S.D.; Hart, D.M.; Westbrook, D.L.; Cohen, P.R.; Carlson, A. Tools for Empirically Analyzing AI Programs. In Proceedings of the Fifth International Workshop on Artificial Intelligence and Statistic, Fort Lauderdale, FL, USA, 4–7 January 1995; pp. 35–41. Available online: https://proceedings.mlr.press/r0/anderson95a.html (accessed on 2 June 2025).
  16. Santoro, A.; Lampinen, A.; Mathewson, K.W.; Lillicrap, T.; Raposo, D.; Contributions, E. Symbolic Behaviour in Artificial Intelligence. arXiv 2021, arXiv:2102.03406. [Google Scholar] [CrossRef]
  17. McCarthy, J. Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I. Commun. ACM 1960, 3, 184–195. [Google Scholar] [CrossRef]
  18. Alam, M.; Groth, P.; Hitzler, P.; Paulheim, H.; Sack, H.; Tresp, V. CSSA’20: Workshop on Combining Symbolic and Sub-Symbolic Methods and Their Applications. Proc. Int. Conf. Inf. Knowl. Manag. 2020, 3523–3524. [Google Scholar] [CrossRef]
  19. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. Adv. Neural Inf. Process. Syst. 2017, 5999–6009. [Google Scholar] [CrossRef]
  20. Baars, B.J. Global Workspace Theory of Consciousness: Toward a Cognitive Neuroscience of Human Experience. In Progress in Brain Research; Elsevier: Amsterdam, The Netherlands, 2005; Volume 150, pp. 45–53. [Google Scholar] [CrossRef]
  21. Wu, J.; Chen, Z.; Deng, J.; Sabour, S.; Meng, H.; Huang, M. COKE: A Cognitive Knowledge Graph for Machine Theory of Mind. arXiv 2023, arXiv:2305.05390. [Google Scholar] [CrossRef]
  22. Luo, L.; Zhao, Z.; Gong, C.; Haffari, G.; Pan, S. Graph-Constrained Reasoning: Faithful Reasoning on Knowledge Graphs with Large Language Models. arXiv 2024, arXiv:2410.13080. [Google Scholar] [CrossRef]
  23. Wang, X.; Chen, L.; Ban, T.; Usman, M.; Guan, Y.; Liu, S.; Wu, T.; Chen, H. Knowledge Graph Quality Control: A Survey. Fundam. Res. 2021, 1, 607–626. [Google Scholar] [CrossRef]
  24. Zuo, K.; Jiang, Y.; Mo, F.; Lio, P. KG4Diagnosis: A Hierarchical Multi-Agent LLM Framework with Knowledge Graph Enhancement for Medical Diagnosis. arXiv 2024, arXiv:2412.16833. [Google Scholar] [CrossRef]
  25. Bilal, A.; Ebert, D.; Lin, B. LLMs for Explainable AI: A Comprehensive Survey. arXiv 2025, arXiv:2504.00125. [Google Scholar] [CrossRef]
  26. Langley, P.; Laird, J.E.; Rogers, S. Cognitive Architectures: Research Issues and Challenges. Cogn. Syst. Res. 2009, 10, 141–160. [Google Scholar] [CrossRef]
  27. Albus, J.; Huang, H.-M.; Messina, E.; Murphy, K.; Juberts, M.; Lacaze, A.; Balakirsky, S.; Shneier, M.; Hong, T.; Scott, H.; et al. 4D/RCS Version 2.0: A Reference Model Architecture for Unmanned Vehicle Systems. In NIST Interagency/Internal Report (NISTIR); NIST: Gaithersburg, MD, USA, 2002; p. 6910. [Google Scholar] [CrossRef]
  28. Anderson, J.R.; Lebiere, C.J. The Atomic Components of Thought; Psychology Press: New York, NY, USA, 2014; ISBN 9781317778318. [Google Scholar]
  29. Sheikhlar, A.; Thórisson, K.R. Causal Generalization via Goal-Driven Analogy; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2024; Volume 14951, pp. 165–175. [Google Scholar] [CrossRef]
  30. Bello, P.; Bridewell, W. Self-Control on the Path toward Artificial Moral Agency. Cogn. Syst. Res. 2025, 89, 101316. [Google Scholar] [CrossRef]
  31. Vinanzi, S.; Cangelosi, A. CASPER: Cognitive Architecture for Social Perception and Engagement in Robots. Int. J. Soc. Robot. 2024, 1–19. [Google Scholar] [CrossRef]
  32. Gobet, F.; Lane, P.C.R. Learning in the CHREST Cognitive Architecture. In Encyclopedia of the Sciences of Learning; Springer: Boston, MA, USA, 2012; pp. 1920–1923. Available online: http://link.springer.com/10.1007/978-1-4419-1428-6_1732 (accessed on 12 May 2025).
  33. Sun, R.; Slusarz, P.; Terry, C. The Interaction of the Explicit and the Implicit in Skill Learning: A Dual-Process Approach. Psychol. Rev. 2005, 112, 159–192. [Google Scholar] [CrossRef]
  34. Freire, I.T.; Guerrero-Rosado, O.; Amil, A.F.; Verschure, P.F.M.J. Socially Adaptive Cognitive Architecture for Human-Robot Collaboration in Industrial Settings. Front. Robot. AI 2024, 11, 1248646. [Google Scholar] [CrossRef]
  35. Kieras, D.E.; Meyer, D.E. An Overview of the EPIC Architecture for Cognition and Performance With Application to Human-Computer Interaction. Hum. Comput. Interact. 1997, 12, 391–438. [Google Scholar] [CrossRef]
  36. Bjorck, J.; Castañeda, F.; Cherniadev, N.; Da, X.; Ding, R.; Fan, L.J.; Fang, Y.; Fox, D.; Hu, F.; Huang, S.; et al. GR00T N1: An Open Foundation Model for Generalist Humanoid Robots. arXiv 2025, arXiv:2503.14734. [Google Scholar] [CrossRef]
  37. Cui, Y.; Ahmad, S.; Hawkins, J. Continuous Online Sequence Learning with an Unsupervised Neural Network Model. Neural Comput. 2016, 28, 2474–2504. [Google Scholar] [CrossRef] [PubMed]
  38. Choi, D.; Langley, P. Evolution of the Icarus Cognitive Architecture. Cogn. Syst. Res. 2018, 48, 25–38. [Google Scholar] [CrossRef]
  39. Franklin, S.; Madl, T.; Strain, S.; Faghihi, U.; Dong, D.; Kugele, S.; Snaider, J.; Agrawal, P.; Chen, S. A LIDA Cognitive Model Tutorial. Biol. Inspired Cogn. Archit. 2016, 16, 105–130. [Google Scholar] [CrossRef]
  40. Wang, P.; Li, X.; Hammer, P. Self in NARS, an AGI System. Front. Robot. AI 2018, 5, 20. [Google Scholar] [CrossRef]
  41. Wang, P.; Li, X.; Hammer, P. Self-Awareness and Self-Control in NARS; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2017; Volume 10414, pp. 33–43. Available online: https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2018.00020/full (accessed on 18 June 2025).
  42. Goertzel, B.; Bogdanov, V.; Duncan, M.; Duong, D.; Goertzel, Z.; Horlings, J.; Ikle’, M.; Meredith, L.G.; Potapov, A.; de Senna, A.L.; et al. OpenCog Hyperon: A Framework for AGI at the Human Level and Beyond. arXiv 2023, arXiv:2310.18318. [Google Scholar] [CrossRef]
  43. Ingrand, F.F.; Georgeff, M.P.; Rao, A.S. An Architecture for Real-Time Reasoning and System Control. IEEE Expert 1992, 7, 34–44. [Google Scholar] [CrossRef] [PubMed]
  44. Rosenbloom, P.S.; Demski, A.; Ustun, V. The Sigma Cognitive Architecture and System: Towards Functionally Elegant Grand Unification. J. Artif. Gen. Intell. 2016, 7, 1–103. [Google Scholar] [CrossRef]
  45. Laird, J.E. The Soar Cognitive Architecture; The MIT Press: Cambridge, UK, 2012; ISBN 9780262301145. [Google Scholar]
  46. Eliasmith, C.; Stewart, T.C.; Choo, X.; Bekolay, T.; DeWolf, T.; Tang, C.; Rasmussen, D. A Large-Scale Model of the Functioning Brain. Science 2012, 338, 1202–1205. [Google Scholar] [CrossRef] [PubMed]
  47. Xu, J.; Yang, Y.; Fang, H.; Liu, H.; Zhang, W. FAMSeC: A Few-Shot-Sample-Based General AI-Generated Image Detection Method. arXiv 2024, arXiv:2410.13156. [Google Scholar] [CrossRef]
  48. D’Ariano, G.M.; Faggin, F. Hard Problem and Free Will: An Information-Theoretical Approach. In Artificial Intelligence Versus Natural Intelligence; Springer International Publishing: Cham, Switzerland, 2022; pp. 145–192. [Google Scholar] [CrossRef]
  49. Haikonen, P.O.A. Consciousness and Sentient Robots. Int. J. Mach. Conscious. 2013, 5, 11–26. [Google Scholar] [CrossRef]
  50. Boland, L.A. Scientific Thinking without Scientific Method: Two Views of Popper. In New Directions in Economic Methodology; Backhouse, R.E., Ed.; Routledge: London, UK, 1994; pp. 154–172. [Google Scholar] [CrossRef]
  51. Han, J.; Hui, Z.; Tian, F.; Chen, G. Review on Bio-Inspired Flight Systems and Bionic Aerodynamics. Chin. J. Aeronaut. 2021, 34, 170–186. [Google Scholar] [CrossRef]
  52. Griffin, D.R. Echolocation by Blind Men, Bats and Radar. Science 1944, 100, 589–590. [Google Scholar] [CrossRef]
  53. Duffy, B.R. Fundamental Issues in Affective Intelligent Social Machines. Open Artif. Intell. J. 2008, 2, 21–34. [Google Scholar] [CrossRef]
  54. Möck, L.A. Prediction Promises: Towards a Metaphorology of Artificial Intelligence. J. Aesthet. Phenom. 2022, 9, 119–139. [Google Scholar] [CrossRef]
  55. Albarracin, M.; Hipólito, I.; Tremblay, S.E.; Fox, J.G.; René, G.; Friston, K.; Ramstead, M.J.D. Designing Explainable Artificial Intelligence with Active Inference: A Framework for Transparent Introspection and Decision-Making. In Communications in Computer and Information Science; Springer: Cham, Switzerland, 2024; Volume 1915, pp. 123–144. Available online: https://link.springer.com/10.1007/978-3-031-47958-8_9 (accessed on 12 June 2025).
  56. Renn, B.N.; Schurr, M.; Zaslavsky, O.; Pratap, A. Artificial Intelligence: An Interprofessional Perspective on Implications for Geriatric Mental Health Research and Care. Front. Psychiatry 2021, 12, 734909. [Google Scholar] [CrossRef]
  57. Arévalo-Royo, J.; Flor-Montalvo, F.J.; Latorre-Biel, J.I.; Tino-Ramos, R.; Martínez-Cámara, E.; Blanco-Fernández, J. AI Algorithms in the Agrifood Industry: Application Potential in the Spanish Agrifood Context. Appl. Sci. 2025, 15, 2096. [Google Scholar] [CrossRef]
  58. Susilo, T. The Role of Artificial Intelligence in Personalizing Learning for Each Student. J. Int. Lingua Technol. 2024, 3, 229–242. [Google Scholar] [CrossRef]
  59. Sharma, S.K. AI-Enhanced Cyber Threat Detection and Response Systems. Shodh Sagar J. Artif. Intell. Mach. Learn. 2024, 1, 43–48. [Google Scholar] [CrossRef]
  60. Arévalo-Royo, J.; Flor-Montalvo, F.J.; Latorre-Biel, J.I.; Martínez-Cámara, E.; Blanco-Fernández, J. Cognitive Systems for the Energy Efficiency Industry. Energies 2024, 17, 1860. [Google Scholar] [CrossRef]
Figure 1. Layered structure of a proposed cognitive architecture.
Table 1. General functional modules of monitoring consciousness.

Module | Operational Function
1. Selective attention [19] | Information filtering
2. Working memory [20] | Temporary storage of representations
3. Introspective representation [21] | Internal modeling of the agent’s own state
4. Reasoning system [22] | Inference based on acquired knowledge
5. Execution monitor [23] | Performance and consistency evaluation
6. Reporting system [24] | Articulation of diagnostics and decisions
Table 2. Twenty most influential cognitive architectures.

Name | Description | Functions | Ref.
4D-RCS | Supports robotic planning via spatial-temporal layered control hierarchies. | 1, 2, 3, 4, 5, 6 | [27]
ACT-R | Models modular human cognition with declarative and procedural memory buffers. | 2, 4 | [28]
AERA | Dynamically rewrites cognitive rules through introspective self-monitoring and evolution. | 1, 3, 4, 5 | [29]
ARCADIA | Combines reactive control with hierarchical deliberative capabilities, integrating attention mechanisms. | 1, 2, 3, 4, 5 | [30]
CASPER | Enables perspective-taking and goal inference in human–robot collaborative tasks. | 1, 2, 4 | [31]
CHREST | Encodes hierarchical chunks for attention-constrained learning and pattern recognition. | 1, 2, 4 | [32]
CLARION | Simulates implicit–explicit knowledge interaction in cognitive and meta-cognitive domains. | 2, 4 | [33]
DAC-HRC | Adapts robot behavior to shared human goals in industrial collaboration. | 2, 4, 5 | [34]
EPIC | Models multitasking through perceptual-motor channels and central cognitive processors. | 1, 2, 5 | [35]
GR00T | Plans and executes fine-grained robotic actions from visual-linguistic instructions. | 1, 2, 4, 5, 6 | [36]
HTM | Encodes and predicts sequences using sparse distributed representations in neocortical fashion. | 4 | [37]
ICARUS | Hierarchically organizes concepts and skills for deliberative agent control. | 2, 4, 5 | [38]
LIDA | Implements attention, episodic memory, and conscious decision cycles in autonomous agents. | 1, 2, 4, 5 | [39]
MERLIN2 | Combines symbolic planning and emergent control for autonomous manipulation. | 2, 4, 5, 6 | [40]
NARS | Performs adaptive reasoning under uncertainty with introspective capabilities. | 1, 2, 3, 4, 5, 6 | [41]
OpenCog Hyperon | Integrates symbolic and subsymbolic learning in an atom-based attention-driven graph. | 1, 2, 3, 4, 5, 6 | [42]
PRS | Executes real-time decision-making using procedural plans and intention filtering. | 2, 4, 5 | [43]
Sigma | Unifies reasoning and probabilistic modeling via factor graphs in cognitive modules. | 2, 4 | [44]
Soar | Combines long-term learning and problem-solving through chunk-based memory. | 2, 4, 5, 6 | [45]
Spaun | Implements biologically plausible working memory and action generation via neural simulation. | 1, 2, 3, 4, 5, 6 | [46]
