1. Introduction
When the Wright brothers first achieved powered flight, they were not emulating the muscle structure of birds or the flapping of wings. Aeroplanes do not fly like birds, yet they often fly much better. This mechanical divergence from biological inspiration poses a provocative analogy for artificial intelligence: must AI “think” like humans to be effective, or is human-like cognition merely one of many viable paths to intelligent behaviour [1]?
This paper considers the merits and limitations of designing AI systems that mimic human cognition. We examine whether human thought processes are an optimal blueprint for intelligence or whether diverging from them, just as aviation engineers diverged from ornithology, can lead to superior outcomes.
2. The Human Model: A Source of Insight and Constraint
Human cognition is the only known general intelligence (although some have argued that this view is deeply anthropocentric [2]). On this basis, cognitive architectures modelled after human cognitive mechanisms (such as symbolic reasoning, reinforcement learning, and working memory) offer a solid starting point for AI design [3]. Cognitive science and neuroscience provide detailed maps of how humans perceive, reason, learn, and act.
Approaches like cognitive architectures (e.g., ACT-R [4] and SOAR [5]), neural–symbolic systems, and neuromorphic computing [6] explicitly seek to capture aspects of human cognition. These systems aim not only for functional intelligence but also for human-comprehensible reasoning, interpretability, and alignment with our values [7].
However, human cognition is deeply shaped by biological limitations: slow neurons, metabolic constraints, and limited memory. If we are to design systems that are not constrained by biology, should we retain the cognitive idiosyncrasies that arise from it?
3. The Engineering Perspective: Function over Imitation
From an engineering standpoint, the objective is to build systems that work: efficiently, reliably, and scalably. In many domains, AIs already surpass human capabilities using non-human strategies. AlphaGo’s novel strategies [8], GPT-style models’ vast associative capacities [9], and DeepMind’s AlphaFold [10] all show that raw performance does not require human-like reasoning.
Much like aircraft exploit the laws of aerodynamics without mimicking birds, AI systems exploit statistical, combinatorial, and algorithmic properties of data without recapitulating the human brain. Such systems can be optimised for scale, speed, and specialisation, with architectures that would be alien to any human mind.
Furthermore, machine learning enables forms of representation and problem-solving that are difficult for humans to comprehend but which achieve remarkable empirical success. In this light, insisting on human-likeness may act as a cognitive bottleneck [3].
4. The Philosophical and Ethical Challenge
Still, there are good reasons to care about human-like AI. If we desire alignment (AIs that share human goals, ethics, and intuitions), it may be helpful for AIs to reason in ways that are legible to us [7]. Human-likeness aids interpretability and trust, especially in high-stakes contexts like healthcare, autonomous vehicles, and legal decisions.
Moreover, certain ethical dilemmas, such as whether AIs can have moral status or rights, hinge on whether they can possess human-like consciousness, empathy, or agency. If AIs are built in radically non-human ways, these questions become harder to answer.
The analogy to flight may also be misleading. Unlike the natural constraints of flight, which are well-understood physical laws, intelligence is more intimately tied to goals, values, and contexts. We do not just want AIs to be effective; we want them to be safe, comprehensible, and fair [1].
5. Toward Hybrid Models of Intelligence
Rather than framing the debate as human-like versus non-human, we may seek hybrid models: systems that combine the strengths of machine scalability and precision with the intuitiveness and social embeddedness of human cognition. This includes integrating symbolic reasoning with statistical learning, or crafting user interfaces that enable humans to interact with complex non-human reasoning [3].
There is also a role for human-in-the-loop and human-centred AI, where non-human intelligence is made accessible through interpretability tools and value alignment protocols [7].
6. Conclusions
Planes do not flap their wings, and AIs do not need to think like humans, at least not in all respects (Figure 1). Yet just as aviation borrowed ideas from birds before taking off in new directions, AI may benefit from understanding human cognition while remaining free to transcend it.
Designing AIs that think like humans may help with alignment, safety, and trust. But designing AIs that think better (differently, creatively, and effectively) may unlock the full potential of machine intelligence. Intelligence might encompass a wider array of attributes than the human model implies [2], and embracing this diversity could be crucial in developing effective and beneficial AI systems.
Funding
This research was funded by an Accelerate Programme for Scientific Discovery Research Fellowship.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Acknowledgments
During the preparation of this manuscript, the author used DALL-E to create an image. The author has reviewed and edited the output and takes full responsibility for the content of this publication.
Conflicts of Interest
The author declares no conflicts of interest. The funder had no role in the design of the study; in the collection, analyses, or interpretation of the data; in the writing of the manuscript; or in the decision to publish the results.
References
- Dennett, D.C. Brainchildren: Essays on Designing Minds; MIT Press: Cambridge, MA, USA, 1998. [Google Scholar]
- Holm, H.; Banerjee, S. Intelligence in animals, humans and machines: A heliocentric view of intelligence? AI Soc. 2024, 40, 1169–1171. [Google Scholar] [CrossRef]
- Lake, B.M.; Ullman, T.D.; Tenenbaum, J.B.; Gershman, S.J. Building machines that learn and think like people. Behav. Brain Sci. 2017, 40, e253. [Google Scholar] [CrossRef] [PubMed]
- Anderson, J.R.; Matessa, M.; Lebiere, C. ACT-R: A theory of higher level cognition and its relation to visual attention. Hum. Comput. Interact. 1997, 12, 439–462. [Google Scholar] [CrossRef]
- Laird, J.E.; Newell, A.; Rosenbloom, P.S. Soar: An architecture for general intelligence. Artif. Intell. 1987, 33, 1–64. [Google Scholar] [CrossRef]
- Mead, C. Neuromorphic electronic systems. Proc. IEEE 1990, 78, 1629–1636. [Google Scholar] [CrossRef]
- Russell, S. Human Compatible: Artificial Intelligence and the Problem of Control; Viking: Woodland Hills, CA, USA, 2019. [Google Scholar]
- Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489. [Google Scholar] [CrossRef] [PubMed]
- Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901. [Google Scholar]
- Jumper, J.; Evans, R.; Pritzel, A.; Green, T.; Figurnov, M.; Ronneberger, O.; Tunyasuvunakool, K.; Bates, R.; Žídek, A.; Potapenko, A.; et al. Highly accurate protein structure prediction with AlphaFold. Nature 2021, 596, 583–589. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).