Proceeding Paper

When Planes Fly Better than Birds: Should AIs Think like Humans? †

by
Soumya Banerjee
Department of Computer Science, University of Cambridge, Cambridge CB3 0FD, UK
Presented at the 1st International Online Conference of the Journal Philosophies, 10–14 June 2025; Available online: https://sciforum.net/event/IOCPh2025.
Proceedings 2025, 126(1), 9; https://doi.org/10.3390/proceedings2025126009
Published: 16 September 2025

Abstract

As artificial intelligence (AI) systems continue to outperform humans in an increasing range of specialised tasks, a fundamental question emerges at the intersection of philosophy, cognitive science, and engineering: should we aim to build AIs that think like humans, or should we embrace non-human-like architectures that may be more efficient or powerful, even if they diverge radically from biological intelligence? This paper draws on a compelling analogy from the history of aviation: the fact that aeroplanes, while inspired by birds, do not fly like birds. Instead of flapping wings or mimicking avian anatomy, engineers developed fixed-wing aircraft governed by aerodynamic principles that enabled superior performance. This decoupling of function from the biological form invites us to ask whether intelligence, like flight, can be achieved without replicating the mechanisms of the human brain. We explore this analogy through three main lenses. First, we consider the philosophical implications: What does it mean for an entity to be intelligent if it does not share our cognitive processes? Can we meaningfully compare different forms of intelligence across radically different substrates? Second, we examine engineering trade-offs in building AIs modelled on human cognition (e.g., through neural–symbolic systems or cognitive architectures) versus those designed for performance alone (e.g., deep learning models). Finally, we explore the ethical consequences of diverging from human-like thinking in AI systems. If AIs do not think like us, how can we ensure alignment, predictability, and shared moral frameworks? By critically evaluating these questions, this paper advocates for a pragmatic and pluralistic approach to AI design: one that values human-like understanding where it is useful (e.g., for interpretability or human–AI interaction) but also recognises the potential of novel architectures unconstrained by biological precedent. Intelligence may ultimately be a broader concept than the human example suggests, and embracing this plurality may be key to building robust and beneficial AI systems.

1. Introduction

When the Wright brothers first achieved powered flight, they were not emulating the muscle structure of birds or the flapping of wings. Aeroplanes do not fly like birds, yet they often fly much better. This mechanical divergence from biological inspiration offers a provocative analogy for artificial intelligence: must AI “think” like humans to be effective, or is human-like cognition merely one of many viable paths to intelligent behaviour [1]?
This paper considers the merits and limitations of designing AI systems that mimic human cognition. We examine whether human thought processes are an optimal blueprint for intelligence or whether diverging from them, just as aviation engineers diverged from ornithology, can lead to superior outcomes.

2. The Human Model: A Source of Insight and Constraint

Human cognition is the only known example of general intelligence (although some have argued that this view is anthropocentric [2]). From this perspective, cognitive architectures modelled after human neural mechanisms (such as symbolic reasoning, reinforcement learning, and working memory) offer a solid starting point for AI design [3]. Cognitive science and neuroscience provide detailed maps of how humans perceive, reason, learn, and act.
Approaches like cognitive architectures (e.g., ACT-R [4] and SOAR [5]), neural–symbolic systems, and neuromorphic computing [6] explicitly seek to capture aspects of human cognition. These systems aim not only for functional intelligence but also for human-comprehensible reasoning, interpretability, and alignment with our values [7].
However, human cognition is deeply shaped by biological limitations: slow neurons, metabolic constraints, and limited memory. If we are to design systems that are not constrained by biology, should we retain the cognitive idiosyncrasies that arise from it?

3. The Engineering Perspective: Function over Imitation

From an engineering standpoint, the objective is to build systems that work—efficiently, reliably, and scalably. In many domains, AIs already surpass human capabilities using non-human strategies. AlphaGo’s novel strategies [8], GPT-style models’ vast associative capacities [9], and DeepMind’s AlphaFold [10] all show that raw performance does not require human-like reasoning.
Much like aircraft exploit the laws of aerodynamics without mimicking birds, AI systems exploit statistical, combinatorial, and algorithmic properties of data without recapitulating the human brain. Such systems can be optimised for scale, speed, and specialisation, with architectures that would be alien to any human mind.
Furthermore, machine learning enables forms of representation and problem-solving that are difficult for humans to comprehend but which achieve remarkable empirical success. In this light, insisting on human-likeness may act as a cognitive bottleneck [3].

4. The Philosophical and Ethical Challenge

Still, there are good reasons to care about human-like AI. If we desire alignment (AIs that share human goals, ethics, and intuitions), it may be helpful for AIs to reason in ways that are legible to us [7]. Human-likeness aids interpretability and trust, especially in high-stakes contexts like healthcare, autonomous vehicles, and legal decisions.
Moreover, certain ethical dilemmas, such as whether AIs can have moral status or rights, hinge on whether they can possess human-like consciousness, empathy, or agency. If AIs are built in radically non-human ways, these questions become harder to answer.
The analogy to flight may also be misleading. Unlike flight, which is governed by well-understood physical laws, intelligence is intimately tied to goals, values, and contexts. We do not just want AIs to be effective; we want them to be safe, comprehensible, and fair [1].

5. Toward Hybrid Models of Intelligence

Rather than framing the debate as human-like versus non-human, we may seek hybrid models: systems that combine the strengths of machine scalability and precision with the intuitiveness and social embeddedness of human cognition. This includes integrating symbolic reasoning with statistical learning, or crafting user interfaces that enable humans to interact with complex non-human reasoning [3].
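As a minimal, purely illustrative sketch of this kind of integration (not drawn from any system cited above; all names, labels, and rules here are hypothetical), consider a classifier in which a statistical component proposes candidate labels and a small set of explicit, human-readable symbolic rules can veto proposals that violate known constraints:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


def statistical_scores(features: Dict[str, float]) -> Dict[str, float]:
    """Toy stand-in for a learned statistical model: scores candidate labels."""
    return {
        "bird": 0.7 * features.get("flaps_wings", 0.0) + 0.3 * features.get("has_feathers", 0.0),
        "plane": 0.8 * features.get("fixed_wings", 0.0) + 0.2 * features.get("jet_engine", 0.0),
    }


@dataclass
class Rule:
    """A human-readable symbolic constraint that can veto a proposed label."""
    description: str
    violates: Callable[[str, Dict[str, float]], bool]


RULES: List[Rule] = [
    Rule("planes do not flap their wings",
         lambda label, f: label == "plane" and f.get("flaps_wings", 0.0) > 0.5),
    Rule("birds do not have jet engines",
         lambda label, f: label == "bird" and f.get("jet_engine", 0.0) > 0.5),
]


def hybrid_classify(features: Dict[str, float]) -> str:
    """Statistical proposal filtered by symbolic constraints."""
    scores = statistical_scores(features)
    # Consider labels from most to least probable; keep the first consistent one.
    for label, _score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        if not any(rule.violates(label, features) for rule in RULES):
            return label
    return "unknown"  # no candidate satisfies the symbolic constraints


if __name__ == "__main__":
    print(hybrid_classify({"fixed_wings": 1.0, "jet_engine": 1.0}))    # plane
    print(hybrid_classify({"flaps_wings": 1.0, "has_feathers": 1.0}))  # bird
```

The point of the sketch is architectural rather than technical: the statistical component supplies flexible pattern matching, while the symbolic layer keeps the system's behaviour legible to, and constrainable by, humans.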
There is also a role for human-in-the-loop and human-centred AI, where non-human intelligence is made accessible through interpretability tools and value alignment protocols [7].

6. Conclusions

Planes do not flap their wings, and AIs do not need to think like humans (at least not in all respects) (Figure 1). Yet just as aviation borrowed ideas from birds before taking off in new directions, AI may benefit from understanding human cognition while remaining free to transcend it.
Designing AIs that think like humans may help with alignment, safety, and trust. But designing AIs that think better (differently, creatively, and effectively) may unlock the full potential of machine intelligence. Intelligence might encompass a wider array of attributes than the human model implies [2], and embracing this diversity could be crucial in developing effective and advantageous AI systems.

Funding

This research was funded by an Accelerate Programme for Scientific Discovery Research Fellowship.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

During the preparation of this manuscript/study, the author used DALL-E for the purposes of creating an image. The author has reviewed and edited the output and takes full responsibility for the content of this publication.

Conflicts of Interest

The author declares no conflicts of interest. The funder had no role in the design of the study; in the collection, analyses, or interpretation of the data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Dennett, D.C. Brainchildren: Essays on Designing Minds; MIT Press: Cambridge, MA, USA, 1998.
  2. Holm, H.; Banerjee, S. Intelligence in animals, humans and machines: A heliocentric view of intelligence? AI Soc. 2024, 40, 1169–1171.
  3. Lake, B.M.; Ullman, T.D.; Tenenbaum, J.B.; Gershman, S.J. Building machines that learn and think like people. Behav. Brain Sci. 2017, 40, e253.
  4. Anderson, J.R.; Matessa, M.; Lebiere, C. ACT-R: A theory of higher level cognition and its relation to visual attention. Hum. Comput. Interact. 1997, 12, 439–462.
  5. Laird, J.E.; Newell, A.; Rosenbloom, P.S. Soar: An architecture for general intelligence. Artif. Intell. 1987, 33, 1–64.
  6. Mead, C. Neuromorphic electronic systems. Proc. IEEE 1990, 78, 1629–1636.
  7. Russell, S. Human Compatible: Artificial Intelligence and the Problem of Control; Viking: Woodland Hills, CA, USA, 2019.
  8. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489.
  9. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901.
  10. Jumper, J.; Evans, R.; Pritzel, A.; Green, T.; Figurnov, M.; Ronneberger, O.; Tunyasuvunakool, K.; Bates, R.; Žídek, A.; Potapenko, A.; et al. Highly accurate protein structure prediction with AlphaFold. Nature 2021, 596, 583–589.
Figure 1. Planes Fly Better Than Birds: Should AIs Think Like Humans? Image created using DALL-E.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
