Article

What Artificial Intelligence May Be Missing—And Why It Is Unlikely to Attain It Under Current Paradigms

Strategic AI Group, Czech Radio, 12099 Praha, Czech Republic
Philosophies 2026, 11(1), 20; https://doi.org/10.3390/philosophies11010020
Submission received: 4 November 2025 / Revised: 30 January 2026 / Accepted: 5 February 2026 / Published: 10 February 2026

Abstract

Contemporary artificial intelligence (AI) achieves remarkable results in data processing, text generation, and the simulation of human cognition. However, it appears to lack key characteristics typically associated with living systems—consciousness, autonomous motivation, and genuine understanding of the world. This article critically examines the possible ontological divide between simulated intelligence and lived experience, using the metaphor of the motorcycle and the horse to illustrate how technological progress may obscure deeper principles of life and mind. Drawing on philosophical concepts such as abduction, tacit knowledge, phenomenal consciousness, and autopoiesis, the paper argues that current approaches to developing Artificial General Intelligence (AGI) may overlook organizational principles whose role in biological systems remains only partially understood. Methodologically, it employs a comparative ontological analysis grounded in philosophy of mind, cognitive science, systems theory, and theoretical biology, supported by contemporary literature on consciousness and biological autonomy. The article calls for a new paradigm that integrates these perspectives—one that asks not only “how to build smarter machines,” but also “what intelligence, life, and consciousness may fundamentally be,” acknowledging that their relation to computability remains an open question.

1. Introduction: The Motorcycle and the Horse—A Metaphor for Overlooked Principles

I arrived at the conference on a motorcycle—fast, efficient, and powerful. It surpassed the horse in every measurable way—except one: the spark of life. A horse is born, grows, learns, fears, rejoices, and wills. A motorcycle requires a factory, fuel, and maintenance. It is unable to build itself, learn autonomously, or make self-originating decisions.
In much the same way, today’s artificial intelligence outperforms humans in data processing speed, memory capacity, and text synthesis. However, it fundamentally lacks consciousness, motivation, and autonomy. If we focus solely on performance, there is a risk of overlooking the principles that make human intelligence truly alive.
This metaphor reveals a fundamental distinction between functional imitation and authentic intelligence. The metaphor is not intended as an ontological proof, but as an illustration of the distinction between current computational systems and living organisms, whose internal organization may involve principles that remain unknown to us. While current AI systems achieve impressive results across many tasks, they lack the essential qualities of living beings: spontaneity, autonomous motivation, and genuine understanding of the world—not merely its simulation. The core philosophical thesis of this article is simple: simulation is not experience, and no degree of simulation can fully bridge an ontological gap.
It is important to clarify at the outset that this article does not argue for the absolute impossibility of artificial intelligence ever attaining consciousness. Rather, it highlights that current AI architectures are grounded entirely in computational processes—the manipulation of symbols or numerical representations according to algorithmic rules. It remains conceivable that the human mind relies on processes not fully reducible to computation, whether these involve as-yet unexplained aspects of quantum physics, emergent phenomena in biological systems, or other forms of material organization. If consciousness or autonomous motivation were to depend on such non-computational mechanisms, then present AI systems—built upon known physical laws and algorithmic computability—could not achieve genuine general intelligence, regardless of how far they are scaled.

2. Methodology: Beyond Algorithms—Abduction, Tacit Knowledge, and the Limits of Simulation

This section outlines the conceptual framework adopted in the present analysis. It combines philosophical argumentation with comparative ontology, integrating concepts from philosophy of mind, cognitive science, systems theory, and theoretical biology. These are critically evaluated against the constraints and capabilities of current AI architectures, with an emphasis on identifying ontological limitations that cannot be resolved by incremental technical improvements.
In The Myth of Artificial Intelligence, Erik J. Larson warns against the illusion of progress [1]. He argues that current AI systems cannot precisely replicate the essence of human intelligence—abduction, the intuitive leap that allows us to form hypotheses without complete data [2]. AI does not know what it does not know—and it cannot ask, because it has no awareness of ignorance. It lacks epistemic self-awareness. Worse still, AI systems’ successes may hinder deeper inquiry by creating the false impression that we are close to solving intelligence.
Charles Sanders Peirce, the founder of abductive reasoning, described it as the logic of creativity—the ability to “guess” the best explanation from available data [2]. Michael Polanyi later emphasized that much of human knowledge is tacit—unspoken, intuitive, and impossible to fully formalize [3]. Polanyi’s notion of tacit knowledge refers to embodied, pre-reflective, context-sensitive skill—forms of knowing that are enacted through lived experience and cannot be fully articulated or formalized. By contrast, the opacity of contemporary neural networks is a mechanistic issue of interpretability: their internal representations are difficult to analyze, but this does not constitute a form of tacit understanding. Although the two phenomena are sometimes compared because both resist explicit description, they differ fundamentally in origin, function, and epistemic status. Human tacit knowledge arises from embodied engagement with the world, whereas neural network opacity results from architectural complexity and high-dimensional parameter spaces.
Although contemporary AI systems can output numerical confidence scores or probability estimates, these do not constitute epistemic self-awareness [4]. Such values reflect statistical properties of model outputs, not an internally accessible sense of knowing or not knowing. A system may assign high confidence to an incorrect answer or low confidence to a correct one, yet it has no awareness of this discrepancy. Confidence scores therefore do not amount to meta-cognition or recognition of ignorance; they are computational artifacts rather than indicators of an inner epistemic perspective.
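The gap between a confidence score and genuine self-knowledge can be made concrete with a toy sketch (the candidate answers and logit values below are purely illustrative, not drawn from any real model): a softmax merely turns raw scores into a probability distribution, and nothing in that arithmetic represents the system's awareness of being right or wrong.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the question "What is the capital of Australia?".
# The model is confidently wrong: nothing in these numbers encodes any
# awareness of the error.
candidates = ["Sydney", "Canberra", "Melbourne"]
logits = [5.0, 1.0, 0.5]  # illustrative values only

probs = softmax(logits)
confidence = max(probs)
prediction = candidates[probs.index(confidence)]

print(prediction, round(confidence, 3))  # high confidence, incorrect answer
```

The "confidence" here is a statistical property of the output distribution, exactly as the paragraph above describes; the system has no access to the discrepancy between that number and the truth.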
Larson’s critique strikes at the heart of AI research. Abduction—the capacity to generate plausible hypotheses from incomplete information—requires a creative logical leap that current algorithms are unable to perform. Deduction follows rules, induction finds patterns, but abduction invents new possibilities from silent knowledge. It is this capacity that enables scientific breakthroughs, artistic originality, and everyday decision-making under uncertainty.
Machine learning systems model patterns in data, but they cannot explain them. They rarely generate genuinely novel hypotheses, and when they appear to, they cannot assess their relevance. They have no goals—only loss functions to optimize. They simulate logic—but they do not think, feel, or intend. As Daniel Dennett noted, “Computers only do what we understand well enough to teach them” [5]. Even if this statement seems outdated in light of recent generative AI advances, its essence remains valid.
Larson argues that the myth of AI “has deeply harmful consequences when taken seriously, because it undermines science” [1]. The current pursuit of AGI relies on refining transformer architectures, adding agentic systems, chains of thought, and memory modules. These are sophisticated simulations of human behavior—but not its principles. They are motorcycles, not horses: fast, efficient, and powerful—but something essential is missing.

3. Discussion: Consciousness, Autopoiesis, and the Ontological Divide

Consciousness does not appear to emerge merely from computational complexity; it may be ontologically distinct. Thomas Nagel famously asked, “What is it like to be a bat?”—pointing to the irreducible nature of subjective experience, or qualia [6].
David Chalmers framed the “hard problem of consciousness” as the challenge of explaining why subjective experience exists at all [7]. AI systems such as large language models can generate texts that describe emotions, but they do not experience what they describe. As John Searle illustrated in his Chinese Room thought experiment, functional simulation of understanding is not the same as genuine understanding [8]. A computer manipulating Chinese symbols may produce coherent output, but it does not comprehend the language. Today’s AI models operate similarly—manipulating numerical representations of tokens to produce human-readable text, but without internal grasp of meaning.
Recent theoretical syntheses reinforce this distinction. Seth and Bayne’s survey of contemporary theories of consciousness underscores that no prevailing account treats consciousness as reducible to a mere “feature list” implementable in code; rather, consciousness is seen as an integrated, holistic property of living systems [9]. This aligns with the view presented here: simulation of conscious behavior is not equivalent to conscious experience.
Table 1 summarizes the key structural and functional differences between living systems and artificial constructs. While machines rely on external design, energy, and control, living organisms generate their own structure, motivation, and adaptation from within. This contrast highlights the ontological gap between simulation and genuine autonomy—a gap that current AI architectures have still failed to bridge.
Living systems possess the ability to self-replicate, self-regulate, and generate intention. AI, in its current form, lacks intrinsic internal states, motivation, and autonomous origin. It reacts, but it does not experience; it functions cognitively, but it does not live.
This is where the concept of autopoiesis becomes decisive. Maturana and Varela originally defined autopoiesis as the self-production of a system that maintains and regenerates its own organization [10]. Moreno and Mossio expanded on this, arguing that biological autonomy entails a causal closure of constraints, whereby the organism’s regulatory structures are generated and maintained internally [11]. This form of organization is categorically absent in artificial systems.
Recent analyses suggest that “no current AI systems are conscious,” yet also claim that “there are no obvious technical barriers to implementing certain features of consciousness artificially” [12]. However, Kauffman and Roli caution that the living world is not reducible to a theorem: its open-ended creativity cannot be fully captured by formal models [13]. From this perspective, the absence of consciousness in AI is not a temporary technical gap but a consequence of the fundamentally different causal architectures of living and artificial systems.
Philosophical analysis also supports the idea that “consciousness is the initiator of motivation” [14]. Without authentic consciousness, present AI cannot develop true autonomous goals. It can follow programmed objectives or mimic motivated behavior, but it cannot spontaneously generate its own intentions. Froese and Taguchi highlight that without a mechanism for intrinsic meaning-making, any goal structure in AI remains externally imposed and semantically inert [15].
Consciousness is probably not just another module to be added to an architecture. It is a foundational property of living beings. If AI lacks it, then it is not alive—and may never be, unless built on radically new principles of organization.
It is important to emphasize that the argument developed here concerns the organizational principles of today’s fully computable AI architectures. If future artificial systems were to achieve genuine autopoiesis—understood as self-production, self-maintenance, and the internal generation of regulatory constraints—then the ontological distinction drawn in this section would need to be reconsidered. Such a development would imply a shift from heteropoietic, externally sustained architectures toward systems capable of generating their own organization from within. Whether this is conceptually or technically possible remains an open question, but the present analysis is limited to current computational systems, which do not exhibit these properties.

4. Unifying Principle: What Connects AI and Machine Production

The motorcycle cannot build itself. It requires a factory, machines, plans, and materials. AI likewise depends on data, infrastructure, and human guidance. Life needs none of these—it replicates, grows, and adapts on its own. This suggests the existence of a principle that current science may not yet understand or may be ignoring.
This parallel between AI and industrial production is not accidental. Both rely fundamentally on external organization and the supply of energy and information. Living systems, by contrast, are autopoietic—they create and sustain themselves through their own internal processes [10,11]. This capacity for self-organization and self-creation is radically different from the heteropoietic nature of current AI. Figure 1 illustrates the key differences between living and artificial systems.
Humberto Maturana and Francisco Varela described living systems as entities whose organization exceeds the sum of their components [10]. They are not merely systems that perform tasks—they are systems that become, evolving from within. Present AI systems, by contrast, originate entirely from external processes; they never arise from themselves.
Kauffman and Roli emphasize that living systems are characterized by endless adjacent possible states, meaning they can continually generate novel functions and structures beyond any pre-specified state space [13]. This property resists algorithmic prediction and simulation, undermining the assumption that scaling computational complexity will inevitably yield consciousness.
It is possible that the missing principle is internal organization. It might be something akin to “biological intentionality,” or it might be consciousness itself [9,13]. Until we understand this principle, we will continue building better motorcycles—hoping one day they will run and feel like horses.
The prevailing reductionist approach to AI presumes that real intelligence will emerge from increasing complexity and computational power [15]. Yet this view overlooks the possibility that consciousness—and true intelligence—may require qualitatively distinct principles of matter and information organization [9,13,16]. Today’s AI systems are built upon vast arrays of silicon-based logic structures, trained on immense corpora of multimodal data. The blinking LEDs of data centers may evoke the illusion of thought—but in principle, the same function could be replicated by an astronomical number of gears and levers. Such a (practically unbuildable) mechanical apparatus could converse with us just like today’s chatbots. But could a sufficiently vast system of gears actually think? Could it be conscious? And if so—how many gears would be enough? Are we perhaps missing something fundamental? Figure 2 highlights the illusion of life often subjectively attributed to electronic AI systems.
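The gears-and-levers thought experiment rests on the substrate independence of computation: any computable function can be composed from a single primitive gate, regardless of what physically realizes that gate. The following minimal sketch (an illustration of this standard result, not a claim drawn from the article's sources) builds XOR entirely from NAND gates; a silicon transistor circuit or a meshed pair of gears computing each NAND would yield exactly the same input–output behavior.

```python
def nand(a: int, b: int) -> int:
    """One NAND gate: realizable in silicon, relays, or meshed gears."""
    return 0 if (a and b) else 1

def xor(a: int, b: int) -> int:
    """XOR composed purely from NAND gates.

    The function computed is identical whatever physically implements
    each gate, which is the point of the gears-and-levers argument:
    computable behavior is substrate-independent.
    """
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

# Verify the full truth table against Python's built-in XOR.
for a in (0, 1):
    for b in (0, 1):
        assert xor(a, b) == (a ^ b)
```

Whether such substrate independence extends beyond function to consciousness is precisely the open question the thought experiment is meant to raise.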

5. Implications for the Future of AI

This analysis does not imply that current AI is not useful. Motorcycles are useful even though they are not horses; in fact, they fulfill certain functions significantly better. Present AI can solve many practical problems without being conscious or alive—even at levels inaccessible to humans. A simple calculator already surpasses human capacity in numerical precision and speed. However, if our goal is to create truly intelligent partners, we must reconsider our fundamental assumptions.
Large Language Model (LLM) AI systems simulate human conversation with artificial yet highly convincing fluency. They create an illusion of understanding that does not actually exist, amplified by our innate tendency to anthropomorphize anything that behaves in a human-like manner.
In addition to this general tendency toward anthropomorphization, contemporary AI systems often employ deliberately human-like design elements—such as natural language interfaces, conversational turn-taking, or simulated emotional cues—which further reinforce the illusion of understanding. These features can lead users to attribute intelligence or agency where none exists. The argument developed in this manuscript, however, does not concern the appearance or behavioral resemblance of artificial systems to humans. Rather, it focuses on the underlying organizational principles that distinguish living, autopoietic systems from externally constructed computational architectures. No degree of anthropomorphic design can substitute for the absence of such intrinsic organization.
As Luciano Floridi observed, “AI is not about machines being like us. It is about how the world changes when parts of reality begin to act like us” [17].
Recent debates about whether systems like ChatGPT (version GPT-5.2) are conscious highlight the urgency of this distinction. If we cannot separate sophisticated simulation from authentic consciousness, we risk both overestimating current systems and underestimating the complexity of true intelligence. Table 2 summarizes the conceptual limitations of current AI systems.
The real risk is not that AI will become conscious, but that we will cease to distinguish between simulation and experience. This has ethical consequences: we might attribute moral status, rights, or decision-making authority to entities that are in fact unconscious, while simultaneously underestimating the complexity and fragility of actual conscious life forms. Froese and Taguchi [15] caution that such misattribution could distort the way we value meaning-making processes, both in humans and in artificial agents.
It is possible that we need a radically different approach—not merely scaling current architectures, but making a qualitative leap toward new principles. Perhaps we must first understand how life emerges from inorganic matter before we can create truly living intelligence.

6. Conclusions: Simulation Is Not Experience

Despite the remarkable progress in artificial intelligence, a fundamental ontological gap remains between simulated cognition and lived experience. The metaphor of the motorcycle and the horse illustrates this divide: while machines can outperform biological organisms in speed and efficiency, they lack the intrinsic qualities of life—self-generation, autonomous motivation, and phenomenal consciousness.
This article has argued that current AI paradigms, grounded in external design and optimization, cannot replicate the internal organization and meaning-making processes characteristic of living systems. The absence of autopoiesis and subjective experience in artificial constructs suggests that intelligence, as we understand it in biological terms, may not be attainable through mere computational scaling.
Contemporary AI is immensely useful, but its successes should not lead us to conclude that the path to AGI has already been found. It is possible that genuine intelligence requires principles we do not yet understand—perhaps ones that touch upon the very nature of consciousness. Whether such principles exist, and whether they can be technically realized, remains an open question.
If consciousness is not an emergent property of complexity alone, but a qualitatively distinct phenomenon rooted in biological organization, then the pursuit of AGI under current paradigms may be fundamentally limited. The challenge is not simply to simulate thought, but to understand what thought and consciousness truly are—and whether they can arise outside the domain of life.

7. Afterthought: Rethinking the Foundations

Recent advances in generative AI have reignited debates about the nature of intelligence and the boundaries of simulation. While these systems exhibit impressive capabilities, they remain constrained by the entropy of training data and the absence of intrinsic intentionality. Theoretical insights from information theory and systems biology suggest that this limitation is not merely technical but ontological, as discussed, for example, in [18].
Conversely, some theories argue that consciousness can emerge from complexity [19], which indicates that such foundational questions remain open. At the same time, debates in philosophy of mind and cognitive science highlight that we still lack a definitive account of what consciousness is or how it arises, leaving open whether its emergence depends solely on computational complexity or on deeper organizational principles unique to living systems. This paper contributes to the ongoing discourse by offering the possibility that consciousness and autonomy may require principles beyond computation—principles rooted in self-organizing, living systems. While some argue that consciousness could eventually be simulated through sufficient complexity, such claims remain speculative. The fact that 2 kg of elementary particles, when arranged in a particular configuration (the brain), give rise to conscious experience suggests that material organization matters. Yet whether computable artificial systems can replicate this—whether the brain itself is composed solely of computable components or whether its organization depends on as-yet-unknown or poorly understood non-computational phenomena—remains an open question. The horse does not need to be fast to be alive; the motorcycle does not need to be alive to be fast. Perhaps the real question is not how to simulate intelligence and consciousness, but whether we truly understand what intelligence and consciousness are.
This perspective also leaves open the possibility that future science may uncover principles we do not yet understand. Consciousness or autonomous intelligence may depend on specific forms of material organization, causal closure, or non-computational processes that cannot be implemented within current digital architectures. Until such principles are identified, we cannot assume that increasing computational power or architectural complexity will suffice to produce consciousness. Present AI systems can simulate intelligent behavior with remarkable fidelity, but simulation alone may not constitute a path toward genuine experience.
Some exploratory work has even proposed that living systems might draw on subtle forms of organization not captured by current computational models—for example, the tentative hypothesis of an unobserved informational reservoir influencing biological dynamics in real time [20]. Such possibilities remain highly speculative, yet they illustrate how much we still do not understand about the principles underlying life and consciousness.
It is possible that the long-sought boundary between the living and the non-living is connected to the limitations of today’s computability-based AI. Such systems may even help us discern the contours of that boundary—if, indeed, such a boundary exists.

Funding

This research received no external funding. The work was conducted independently by the author, who is employed by Czech Radio, but the research was carried out privately and outside of institutional duties.

Data Availability Statement

The original contributions presented in this study are included in the article. This manuscript presents a theoretical framework and does not report empirical data.

Acknowledgments

The author thanks his colleagues for their valuable insights and discussions that helped shape the ideas presented in this paper. This research was conducted independently, without institutional funding. Some passages of this manuscript—including the figures—were prepared or refined with the assistance of a large language model (LLM, namely Microsoft Copilot, version 2025). The author takes full responsibility for the content and conclusions presented herein.

Conflicts of Interest

The author declares that no competing interests exist.

References

  1. Larson, E.J. The Myth of Artificial Intelligence; Harvard University Press: Cambridge, MA, USA, 2021.
  2. Peirce, C.S. Collected Papers of Charles Sanders Peirce; Harvard University Press: Cambridge, MA, USA, 1931.
  3. Polanyi, M. The Tacit Dimension; University of Chicago Press: Chicago, IL, USA, 1966.
  4. Kalai, A.T.; Nachum, O.; Vempala, S.S.; Zhang, E. Why Language Models Hallucinate. arXiv 2025, arXiv:2509.04664.
  5. Dennett, D.C. Consciousness Explained; Little, Brown and Company: New York, NY, USA, 1991.
  6. Nagel, T. What is it like to be a bat? Philos. Rev. 1974, 83, 435–450.
  7. Chalmers, D. Facing up to the problem of consciousness. J. Conscious. Stud. 1995, 2, 200–219.
  8. Searle, J.R. Minds, brains, and programs. Behav. Brain Sci. 1980, 3, 417–444.
  9. Seth, A.K.; Bayne, T. Theories of consciousness. Nat. Rev. Neurosci. 2022, 23, 307–321.
  10. Maturana, H.R.; Varela, F.J. Autopoiesis and Cognition: The Realization of the Living; Springer: Berlin/Heidelberg, Germany, 1980.
  11. Moreno, A.; Mossio, M. Biological Autonomy: A Philosophical and Theoretical Enquiry; Springer: Berlin/Heidelberg, Germany, 2015.
  12. Butlin, P.; Long, R.; Elmoznino, E.; Bengio, Y.; Birch, J.; Constant, A.; Deane, G.; Fleming, S.M.; Frith, C.; Ji, X.; et al. Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. arXiv 2023.
  13. Kauffman, S.; Roli, A. The World is Not a Theorem. Entropy 2021, 23, 1467.
  14. Su, J. Consciousness in AI: A Philosophical Perspective Through Motivation. Crit. Debates Humanit. 2024, 3, 1–9.
  15. Froese, T.; Taguchi, S. The Problem of Meaning in AI and Robotics. Philosophies 2019, 4, 14.
  16. Baars, B.J. In the Theater of Consciousness; Oxford University Press: Oxford, UK, 1997.
  17. Floridi, L. The Fourth Revolution: How the Infosphere Is Reshaping Human Reality; Oxford University Press: Oxford, UK, 2014.
  18. Straňák, P. Lossy Loops: Shannon’s DPI and Information Decay in Generative Model Training. Preprints 2025, 7, 2025072260.
  19. Feinberg, T.E.; Mallatt, J. Phenomenal Consciousness and Emergence: Eliminating the Explanatory Gap. Front. Psychol. 2020, 11, 1041.
  20. Straňák, P. An Unobserved Informational Reservoir: A Hypothesis for the Stability and Functional Directionality of Living Systems. Preprints 2026, 2, 2025120289. Available online: https://www.preprints.org/manuscript/202512.0289 (accessed on 3 November 2025).
Figure 1. Graphical illustration of the main differences between living and artificial systems. A horse “replicates” itself, draws the necessary information from within, and obtains energy and material directly from its environment. In contrast, a motorcycle requires blueprints, a factory, machines, and external sources of energy and materials for its production, operating entirely within the currently known laws of physics and computability.
Figure 2. AI implemented in a data center can create the illusion of life—as if machines had begun to think. Yet in principle, the same functional results could be achieved using billions of mechanical gears performing exactly the same operations. In such a case, however, the intuitive sense of a “living machine” would not arise. The question of whether all natural systems—including the brain—are fully computable remains open. Computable AI may reproduce certain functional outputs, but whether it can ever resemble a conscious brain is a problem that current science is not yet able to resolve with confidence.
Table 1. Structural and functional contrasts between living systems and current artificial constructs.

| Aspect | Living Systems (e.g., Horse) | Artificial Machines (e.g., Motorcycle/AI) |
|---|---|---|
| Origin | Self-generated (biological reproduction) | Externally constructed (factory, design) |
| Information Source | Internal (DNA, cellular processes) | External (blueprints, programming, datasets) |
| Energy Acquisition | Autonomous (metabolism, environment) | Dependent on external input (fuel, electricity) |
| Self-replication | Yes (reproduction) | No |
| Self-regulation | Yes (homeostasis, adaptation) | No evidence beyond predefined feedback mechanisms |
| Intentionality | Intrinsic (motivation, goals) | Simulated or externally assigned objectives |
| Consciousness | Present (subjective experience, qualia) | No known phenomenal awareness |
| Development | Evolves and learns organically | Updated via external intervention (training, retraining, upgrades) |
| Ontological Status | Autopoietic (self-creating and sustaining) | Heteropoietic (created and maintained from outside) |
Table 2. Limitations in today’s computable AI systems (e.g., current LLM-based architectures).

| Concept | Definition | Relevance to Living Systems | Limitation in AI Systems |
|---|---|---|---|
| Autopoiesis | Self-creation and self-maintenance of a system through internal processes | Living organisms generate and sustain themselves autonomously | Systems are externally designed and lack self-generative structure |
| Abduction | Reasoning that generates hypotheses from incomplete data | Humans intuitively infer meaning and possibilities beyond given inputs | Systems do not exhibit genuine hypothesis formation; they primarily extrapolate from training data |
| Phenomenal Consciousness | Subjective experience—the “what it is like” aspect of being conscious | Living beings possess first-person awareness and qualia | Systems probably lack subjective experience or an inner perspective |

