Proceeding Paper

Intelligent Behaviour as Adaptive Control Guided by Accurate Prediction †

Nina Poth, Trond A. Tjøstheim and Andreas Stephens
1 Philosophy of Mind and Language, Radboud University, 6525 HT Nijmegen, The Netherlands
2 Department of Philosophy and Cognitive Science, Lund University, 22100 Lund, Sweden
* Author to whom correspondence should be addressed.
Presented at The 1st International Online Conference of the Journal Philosophies, 10–14 June 2025; Available online: https://sciforum.net/event/IOCPh2025.
Proceedings 2025, 126(1), 12; https://doi.org/10.3390/proceedings2025126012
Published: 24 October 2025

Abstract

We build on the predictive processing framework to show that intelligent behaviour is adaptive control, driven by accurate prediction and uncertainty reduction in dynamic environments with limited information. We argue that adaptive control arises through a process of re-concretisation, where learned abstractions are grounded in new situations via embodiment. We use this to explain why AI models often generalise at the cost of detail while biological systems manage to tailor their predictions to specific environments over time. On this basis, we utilise the notion of embodied prediction to provide a new distinction between biological intelligence and the performance exhibited by AI systems.

1. Introduction

Recent work in cognitive science, biology, and artificial intelligence (AI) has renewed interest in defining and assessing intelligence. However, there remains little agreement on what counts as intelligent behaviour in the first place. Legg and Hutter [1] define intelligence as an agent’s ability to achieve goals across diverse environments. This definition highlights goal-directedness and context-independence, but it says little about the role of the body and the environment in achieving goals. This omission is notable, given the central place of embodiment in cybernetics [2] and its current importance for bridging AI and robotics [3].
In this paper, we defend a biological view of intelligence as accurate prediction [4]. We build on the predictive processing framework to show that intelligent behaviour is adaptive control, driven by prediction and uncertainty reduction in dynamic environments with limited information. We further suggest that the role of accurate predictions is to allow intelligent systems to transfer information across contexts, to prepare for action, and to control for likely environmental effects. In particular, adaptive control arises through a process of re-concretisation, where learned abstractions are grounded in new situations via embodiment. We use this to explain why AI models often generalise at the cost of detail while biological systems manage to tailor their predictions to specific environments over time. On this basis, we utilise the notion of embodied prediction to better distinguish between biological intelligence and the performance exhibited by AI systems, a distinction that is not accommodated by alternative accounts that identify intelligence with a capacity for accurate prediction (e.g., [5,6]).
This paper proceeds as follows. First, we present the predictive processing framework and its cybernetic roots. Second, we develop the view of intelligence as embodied predictive control. Third, we contrast biological intelligence with AI systems like large language models (LLMs) based on their differences in accurate re-concretisation. We end with a brief conclusion.

2. Taking a Predictive-Processing Perspective

Many current AI systems, such as large language models, operate impressively by making predictions. Henaff and colleagues ([7], p. 1) argue that “the essence of intelligence is the ability to predict”, and others agree [6,8]. Yet the precise interpretation of this claim, and the relation between prediction and intelligence, remains unsettled.
One of the most developed approaches to prediction in cognition is the predictive processing framework (PP). At its core, PP holds that cognitive systems aim to reduce the mismatch between sensory input and internal predictions [9,10]. This is achieved through information-processing in hierarchical neural architectures that update internal models or act to change the world to fit those models [11]. These two strategies (updating internal representations or acting on the environment) form the basis for adaptive control.
According to the broader free-energy principle (FEP), organisms achieve adaptive control by minimising long-term uncertainty to maintain homeostasis [12]. This shifts the focus from internal error correction to organism-environment dynamics. Embodiment now becomes key, because organisms adapt not only by updating internal models but also by shaping their environment, i.e., by performing niche construction. This interaction with the environment reduces future prediction errors as well as long-term energetic costs [13,14]. From this embodied-PP perspective, predictions in physiology can be interpreted as cybernetic setpoints that allostatic processes work to achieve [15]. PP proponents of this embodied view suggest that prediction is a necessary condition of action and environmental control, thereby recalling earlier insights from cybernetics.
Cybernetics identifies two core components of adaptive behaviour: feedback and control. Firstly, feedback is essential for organisms to achieve self-regulation. To maintain stable states, a system must compare its internal predictions with sensory input and act to reduce discrepancies between them. Feedback control involves sensors, comparators, and effectors, that is, physiological components that together can detect and minimise such error. Take biological thermoregulation as an example: neurons sense temperature, the hypothalamus integrates these signals against setpoints, and bodily responses such as vasodilation and shivering work to restore thermal balance.
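To make this loop concrete, the following is a minimal sketch of the sensor-comparator-effector cycle in Python. The setpoint, gain, and update rule are illustrative assumptions of ours, not a physiological model from the paper.

```python
# Minimal cybernetic feedback loop (sensor -> comparator -> effector), using
# thermoregulation as the example. Constants are illustrative assumptions.

SETPOINT = 37.0   # hypothalamic setpoint: the predicted/desired core temperature (deg C)
GAIN = 0.2        # how strongly effectors respond to a given error

temperature = 39.0  # toy initial state: the body is too warm

for _ in range(20):
    error = temperature - SETPOINT    # comparator: mismatch between sensed state and setpoint
    if error > 0:
        temperature -= GAIN * error   # effector: vasodilation sheds heat
    else:
        temperature += GAIN * -error  # effector: shivering generates heat
    # acting changes the next sensed state, so the error shrinks on each pass

print(round(temperature, 2))  # converges towards 37.0
```

The point of the sketch is structural: regulation requires no foresight, only a comparison between a predicted setpoint and current input, plus effectors that reduce the discrepancy.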
Secondly, cyberneticists link regulation to the capacity for adaptive control. They characterise cognition as a dynamic process involving perception, choice, and action. On an embodied PP view, prediction errors can be reduced by using affordances, that is, opportunities for action that emerge from organism-environment interactions [16]. These affordances support allostasis, i.e., the ongoing regulation of internal states under changing conditions by physiological processes that, in doing so, expend energy. Take von Uexküll’s [17] example of a tick that drops onto a passing mammal. For the tick, this is risky behaviour, but it is evolutionarily tuned in a way that conserves energy in the long term through accurate prediction. Exploration plays a key role here in the tick’s learning of how its environment affords the resources it needs to survive [18,19], and as such, it requires ongoing interaction with the environment.
There is an ongoing debate among proponents of the PP programme about whether cognition is representational or non-representational, and about what kind of embodiment obtains under versions of the framework that subscribe to a representationalist view, as we do here, following Ref. [20]. We favour a view of predictions as embodied in the sense that their accuracy is conditioned on their usefulness for action, which is typically understood as involving a dynamic coupling between lower-level predictions and sensory stimuli. However, as Section 3 will show, we also endorse a role for higher-level predictions as abstractions away from the immediate embodied experience of a stimulus: this function of prediction serves to model the world and anticipate the future, while improving the accurate transfer of learned information to the concrete novel situations a cognitive system may encounter. Intelligence, on this view, is a capacity for embodied and practical problem-solving. This is also why the PP framework is well suited to examining biological intelligence: abstract predictions serve biological (practical) needs for surviving and thriving, if only in the long term. Artificial systems, by contrast, do not operate under the same biological, action-oriented practicability constraints.

3. Sophisticating Adaptive Control Through Accurate Prediction

3.1. Spatiotemporal Depth

In complex systems, however, adaptive control arguably requires more than reacting to external feedback. It also depends on the ability to generate accurate predictions about future states of the environment, before those states occur and can influence the perceiving system. It has been argued that generating accurate predictions requires internal spatiotemporal depth: information needs to be integrated across multiple timescales and levels of abstraction within a hierarchically organised predictive system [21,22]. Spatiotemporal depth allows organisms to simulate the likely outcomes of different actions before they occur, a function that Sims and Pezzulo [23] describe as the “vicarious use” of predictions, or what the neuroscience literature calls “vicarious trial and error” [24]. Vicarious use of predictions is considered key to adaptive behaviour, particularly in environments where direct feedback is delayed, noisy, or insufficient for goal-achievement.
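To illustrate computationally what the vicarious use of predictions amounts to, here is a small Python sketch in which an agent scores actions by simulating them with an internal forward model before committing to any of them. The forward model, utility function, and search depth are toy assumptions of ours, not part of the cited accounts.

```python
# Sketch of 'vicarious trial and error': evaluate candidate actions by imagining
# their outcomes with an internal model, rather than acting and awaiting feedback.

def vicarious_choice(state, actions, forward_model, utility, depth=3):
    """Pick the action whose simulated future scores best under the utility."""
    def rollout(s, d):
        if d == 0:
            return utility(s)
        # imagine the best continuation from each predicted next state
        return max(rollout(forward_model(s, a), d - 1) for a in actions)

    return max(actions, key=lambda a: rollout(forward_model(state, a), depth - 1))

# Toy demo: an agent on a number line imagines moves before making one.
actions = (-1, +1)
forward_model = lambda s, a: s + a   # internal model: predicted next state
utility = lambda s: -abs(s - 5)      # goal: be near position 5

print(vicarious_choice(0, actions, forward_model, utility))  # -> 1 (step towards the goal)
```

The deeper the rollout, the more the agent detaches from the immediate perception-action loop, which is precisely the functional role the text assigns to spatiotemporal depth.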
Sims and Pezzulo [23] specify two modes of predictive control. One relies on real-time sensory feedback used to update internal models (this is ‘variational free energy minimisation’). The other selects actions based on internally generated expectations of future outcomes (this is ‘expected free energy minimisation’). Notably, the second strategy requires greater internal complexity and deeper spatiotemporal models that can represent unobserved but possible states. In particular, the functional role of spatiotemporal depth is to enable organisms to detach from the immediate perception-action loop and to predict longer-term outcomes with greater precision.
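For readers who want the formal counterpart, the two modes correspond to standard quantities in the active-inference literature. The following formulations are our addition for reference (the paper states the distinction only verbally): q is the recognition density, p the generative model, o observations, s hidden states, τ future time steps, and π a policy.

```latex
% Standard active-inference quantities (added for reference; not stated formally
% in the paper). Mode 1 scores beliefs against current input; Mode 2 scores
% policies against predicted future outcomes.
\begin{align}
  F &= \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
    && \text{(variational free energy)} \\
  G(\pi) &= \sum_{\tau} \mathbb{E}_{q(o_\tau, s_\tau \mid \pi)}\!\left[\ln q(s_\tau \mid \pi) - \ln p(o_\tau, s_\tau)\right]
    && \text{(expected free energy)}
\end{align}
```

Minimising F requires only current input; minimising G(π) requires a model deep enough to generate distributions over future observations and states, which is why the second mode demands greater spatiotemporal depth.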
While Sims and Pezzulo are primarily interested in biological intelligence, spatiotemporal depth—due to its stimulus-independence—also allows for more flexible, abstract, and context-sensitive reasoning, i.e., qualities commonly associated with general intelligence [25]. For example, tasks on the Wechsler Adult Intelligence Scale require subjects to reason across contexts, anticipate outcomes, and abstract from present input to improve their instrumental reasoning and goal-achievement; these are all abilities that benefit from spatiotemporal depth in predictive processing. Thus, spatiotemporal depth is not merely a mark of complexity but a functional requirement for accurate prediction in dynamic, uncertain environments.

3.2. Accuracy

Prediction alone, however, is not sufficient. In biological systems, accuracy also depends on the capacity to transfer predictions across changing contexts as survival demands, and this capacity is shaped by the organism’s embodiment and environmental interaction. By transferring predictions effectively, biological systems reduce their reliance on trial-and-error learning and conserve energy.
Tjøstheim and Stephens [4] characterise general intelligence as the capacity to abstract, recognise patterns, and transfer predictions across contexts. This abstraction allows organisms to identify structural similarities between different situations and reuse earlier learned solutions. For example, New Caledonian crows show this capacity by using and adapting tools across environments [26]. Prediction accuracy, in this framework, is a composite of trueness (how well predictions match the world) and precision (how specific or detailed the prediction is). Both dimensions are necessary for biological systems to exhibit energy-efficient behaviour in real-world environments. The principle of ‘vicarious trial and error’ illustrates this well: animals shift from physically exploring the actual consequences of actions to mentally simulating likely consequences as their internal predictive models become more spatiotemporally sophisticated. According to the framework, this shift reduces unnecessary energy expenditure in the long run, illustrating how the use of predictions supports metabolic efficiency.
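One simple way to operationalise the trueness/precision decomposition is sketched below. This is our illustration, not Tjøstheim and Stephens’ formal definition; it treats trueness as low average bias and precision as low spread.

```python
# Illustrative decomposition of prediction accuracy into trueness (low bias)
# and precision (low spread); an assumption-laden sketch, not the cited account.

from statistics import mean, pstdev

def trueness(predictions, outcomes):
    """Closeness of the average prediction to reality (higher is better)."""
    return -abs(mean(p - o for p, o in zip(predictions, outcomes)))

def precision(predictions):
    """Specificity/consistency of the predictions (higher is better)."""
    return -pstdev(predictions)

outcomes = [5.0, 5.0, 5.0, 5.0]
vague  = [2.0, 8.0, 5.0, 5.4]   # roughly true on average, but imprecise
biased = [6.1, 6.0, 5.9, 6.0]   # precise, but systematically off

print(trueness(vague, outcomes), precision(vague))    # approx -0.1, -2.13
print(trueness(biased, outcomes), precision(biased))  # approx -1.0, -0.07
```

A predictor can score well on either dimension alone; energy-efficient behaviour in the framework requires both.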
The value of accurate prediction is, however, context-sensitive: it depends on the match between internal model complexity and environmental complexity [27], given that organisms must act quickly, adapt to multi-scale changes, and avoid energy-wasting actions, and increasingly so the more complex their environmental conditions are. The employed predictions must therefore be timely, sufficiently detailed, and generalisable across contexts. Intelligence, on this view, depends both on the complexity of an internal model and on the system’s ability to align its predictions with the dynamic structure and complexity of its environment.

4. Artificial Versus Natural Intelligence

We think that framing intelligence as a form of prediction aptly captures functions such as abstraction, foresight, and planning in artificial systems, given the analogous functional role of hierarchical spatiotemporal organisation in both biological and artificial domains. This analogy is supported by recent work demonstrating that deep convolutional networks implement transformational abstractions via convolution and max-pooling, akin to processing in mammalian visual cortex cells ([28], Chap. 3).
Nevertheless, despite evidence for a shared hierarchical architecture underlying prediction and abstraction, significant disanalogies persist between biological and artificial systems, and we think a good account of intelligence or cognition should preserve them. For instance, Halina ([29], p. 316) observes that AI systems like AlphaGo (in its AlphaZero variant), which employs hierarchical Monte Carlo tree search, can transform conceptual spaces in ways that are inaccessible to human cognition. This divergence may stem from AlphaGo’s use of self-play [30,31], which enables it to explore game-space combinations far beyond what any individual human could experience. In other words, AlphaGo’s superhuman performance may reflect a ‘brute force’ aggregation of experience rather than a process analogous to human intelligence, even granting that the stochastic tree-search algorithm is a sophisticated way of avoiding exhaustive search.
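The self-play regime referred to here can be schematised as follows. This is a bare sketch under toy assumptions (AlphaZero additionally guides each move with a learned evaluation network and Monte Carlo tree search), intended only to show how experience accumulates without a human in the game loop.

```python
# Schematic self-play loop: the system generates its own training experience by
# playing against itself, then updates its policy from the finished games.
# All callables here are placeholders, not the AlphaZero implementation.

def self_play(initial_state, legal_moves, apply_move, is_terminal, policy, learn,
              games=10_000):
    for _ in range(games):
        state, trajectory = initial_state, []
        while not is_terminal(state):
            move = policy(state, legal_moves(state))  # current policy selects a move
            trajectory.append((state, move))
            state = apply_move(state, move)
        learn(trajectory, state)  # improve the policy from the game's outcome
```

Because the loop can be run millions of times, the aggregate experience dwarfs any individual human’s, which is what motivates the ‘brute force’ reading in the text.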
Moreover, biological intelligence is fundamentally shaped by ecological conditions, which function as accuracy constraints on how well biological predictions guide appropriate action. Actions are, after all, appropriate only relative to an ecological niche and the agent’s needs or goals. Biological accuracy constraints arise from an organism’s embodied structure and environmental interactions. Although biological systems may offer initial inspiration for AI design [32], the accuracy conditions of AI systems remain distinct from those of biological systems. This is mainly because current AI systems depend on static training datasets, so the nature of their interaction with, and embedding in, the environments they learn from is fundamentally different.
Take AlexNet as an example: Perconti and Plebe [33] show that this deep convolutional network conforms to the classical, disembodied ‘pure vision’ paradigm. This paradigm holds that vision aims to construct detailed world models hierarchically, with higher processing levels depending on, but not influencing, lower ones. AlexNet reflects these tenets through its feedforward, hierarchical convolutional layers, trained on static images paired with hierarchically nested lexical entries drawn from WordNet. There is no embodiment or dynamic environmental interaction, and the system’s success in object classification arises independently of sensorimotor engagement or situated action. Mitchell ([34], pp. 120–123) argues that AlexNet, despite performing well at classifying ‘animal’ versus ‘non-animal’ images, relied on photographic artefacts (‘bokehs’) rather than the natural cues humans would use, so it would be misleading to characterise these systems as performing genuine or human-like object recognition. Such limitations underscore our claim that predictive depth and transformational abstraction, although present in deep learning, do not by themselves achieve the predictive accuracy attained by biological systems.
An embodied approach clarifies why biological and artificial systems diverge in their accuracy conditions. Biological systems are embedded, interactive, and subject to thermodynamic constraints which are absent in AI systems. Disembodied AI models do not dynamically couple with their environment; they have no ‘grip’ [13]. They instead generate outputs from discrete inputs and remain inert thereafter, even in closed domains like AlphaGo’s self-play (but see Ref. [35] for a move in the dynamical direction). Biological brains, in contrast, continuously interact with changing environments, and adapt predictions via evolved mechanisms of prediction-error minimisation. In other words, what distinguishes biological and artificial intelligence is how they are embedded and connect to their environment.
We now want to make this working hypothesis more concrete. In particular, we introduce the notion of ‘re-concretisation’ as a core mechanism of biological intelligence that distinguishes it from AI. We understand re-concretisation as the process by which abstract representations are refined through embodied experience and direct sensory-motor feedback so as to make successful, context-sensitive predictions. This contrasts with the kinds of predictions exhibited by AI systems, whose abstraction mechanisms operate without any direct coupling to a physical environment, which likely leaves them unable to re-specify predictions in novel contexts.
Biological systems achieve accurate re-concretisation via anti-Hebbian learning and sparse coding [36,37]. These neural strategies reduce the representational overlap between neuronal assemblies and preserve the fine-grained distinctions critical for perceiving affordances and guiding action. In contrast, LLMs such as those employing ‘mixture of experts’ architectures [38,39] introduce specialised information pathways to trade off generalisation against precision. Such solutions, however, remain engineering tweaks that compensate for these systems’ lack of embodied grounding.
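To indicate the kind of neural strategy meant here, the following is a heavily simplified sketch of lateral anti-Hebbian decorrelation in the spirit of Földiák [36]. The input model, threshold, target rate, and learning rate are our assumptions; the point is only that units which fire together above chance come to inhibit each other, reducing representational overlap.

```python
# Heavily simplified lateral anti-Hebbian rule (in the spirit of Foldiak 1990):
# co-activity above a target rate strengthens mutual inhibition, pushing units'
# responses apart and sparsifying the code. Parameters are toy assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_units, p_target, lr = 8, 0.125, 0.05
W = np.zeros((n_units, n_units))  # lateral inhibitory weights (strengths >= 0)

for _ in range(2000):
    drive = rng.random(n_units)                  # stand-in for feedforward input
    y = (drive - W @ drive > 0.5).astype(float)  # activity after lateral inhibition
    W += lr * (np.outer(y, y) - p_target ** 2)   # anti-Hebbian: co-firing -> more inhibition
    np.fill_diagonal(W, 0.0)                     # no self-inhibition
    W = np.clip(W, 0.0, None)                    # inhibition strengths stay nonnegative

# After learning, pairwise co-activation is driven down towards chance (p_target**2),
# preserving finer-grained, less overlapping representations.
```

Mixture-of-experts routing pursues a functionally similar separation of pathways, but, as argued above, without any embodied signal determining which distinctions are worth preserving.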
This distinction between biological and artificial intelligence, in terms of accurate re-concretisation, is also illustrated by tasks like the Abstraction and Reasoning Corpus (ARC; [40]), where successful problem-solving requires not only identifying abstract patterns but also re-applying them to new perceptual contexts. Biological agents accomplish this well by integrating low-level sensory cues with high-level expectations; in doing so, they engage procedural, attentional, and memory systems that have co-evolved for dynamic natural environments under energetic constraints. In contrast, because LLMs rely on static training distributions and have no need for proprioceptive and kinesthetic feedback to learn, they cannot refine abstract knowledge through embodied interaction and feedback, and so generalisation comes at a loss of specificity. Mechanisms for correction, such as reinforcement learning from human feedback, may improve AI performance, but this method introduces further dependencies on external cues rather than self-directed adaptation. The concept of re-concretisation captures this discrepancy with biological systems by suggesting that intelligence may require not only predictive capability but also the capacity to continually re-couple abstraction with the concrete details of a specific environmental context.
In essence, re-concretisation consists of, first, the recognition of an abstract goal in a novel context and, second, the realisation of concrete paths to that goal. For the crows, for example, a basic goal is to avoid having their food stolen by other crows, and hiding the food is a strategy for achieving that. However, when moving from a familiar forest environment to a novel riverbed environment, the previous way of implementing the strategy may no longer suffice: trees and leaves must be replaced by riverbanks and straw, for instance [41]. This example shows that paths to goals can vary widely across contexts. Re-concretisation thus involves local affordance discovery [42] and the imagination of possible paths, as well as the interpretation of what goals mean in the novel context: if food is abundant in summer, stealing can become rarer and hiding less pressing, but caching might still be necessary to make it through winter. Hence, both goals and the paths to them can be malleable and subject to re-interpretation in ways that aid adaptation.
As a final note, it must be kept in mind that AI algorithms depend heavily on humans in the loop for their creation and their continuous adjustment to their environment [43]. Furthermore, humans decide on the usefulness of AI solutions: AI systems are designed to be tools for human purposes. This introduces a general asymmetry between AI (originating from intelligent designers) and humans (originating from evolutionary, non-goal-directed processes). One implication of this asymmetry concerns questions about creativity, which remain underexplored. There is work showing that, at the individual level, LLM responses to prompts can look locally ‘creative’, in the sense that they express solutions not immediately apparent in the ingredients of the prompt itself; yet those responses still reuse information averaged from human users’ data [44,45]. This becomes apparent at a more global or collective level of analysis, as these studies show: human responses are on average more novel, as judged by human experts, than those of a set of LLMs responding to the same prompt. While we do not wish to exclude the possibility that LLMs could help human users come up with creative ideas or new solutions to existing problems, the contrast between human and artificial intelligence in controlling the space of possible solutions is well worth highlighting, since it is by solutions that matter to humans that the accuracy of AI systems is assessed.

5. Conclusions

In sum, we have argued that biological intelligence is best understood as adaptive control governed by accurate prediction. Drawing on the predictive processing framework and cybernetic principles, we argued that biological intelligence combines spatiotemporal depth, embodiment, and active engagement with the environment to create predictions that are precise and transferable across contexts. This embodied predictive control allows organisms to anticipate, plan, and act effectively in uncertain environments by grounding abstract knowledge in concrete situations through a process we call re-concretisation.
By using this notion, we have highlighted the current gap between embodied biological intelligence and disembodied AI. While natural and artificial intelligence both rely on hierarchical predictive mechanisms, the conditions that make their predictions accurate differ. Biological intelligence depends on continuous sensorimotor engagement, energy constraints, and mechanisms that preserve detail and promote efficient coding, a combination which supports accurate prediction. In contrast, AI depends heavily on the quality and quantity of static training data and on engineering tweaks to achieve comparable performance. Because AI systems lack a direct and goal-directed way of interacting with the environment, they are more prone to losing details that are relevant to specific contexts. More specifically, they lack the ability to interpret what abstract goals mean in novel contexts and to discover the affordances and paths that would allow those goals to be reached. Yet such details remain key to actions that are appropriate in light of biological needs and niches.

Author Contributions

Conceptualization, N.P., T.A.T. and A.S.; methodology, N.P., T.A.T. and A.S.; investigation, N.P., T.A.T. and A.S.; writing—original draft preparation, N.P., T.A.T. and A.S.; writing—review and editing, N.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Legg, S.; Hutter, M. A collection of definitions of intelligence. Front. Artif. Intell. Appl. 2007, 157, 17–24. [Google Scholar] [CrossRef]
  2. Wiener, N. Cybernetics: Control and Communication in the Animal and the Machine; Wiley: New York, NY, USA, 1948. [Google Scholar]
  3. Rajan, K.; Saffiotti, A. Towards a science of integrated AI and robotics. Artif. Intell. 2017, 247, 1–9. [Google Scholar] [CrossRef]
  4. Tjøstheim, T.A.; Stephens, A. Intelligence as accurate prediction. Rev. Philos. Psychol. 2022, 13, 475–499. [Google Scholar] [CrossRef]
  5. Gamez, D. Measuring intelligence in natural and artificial systems. J. Artif. Intell. Conscious. 2021, 8, 285–302. [Google Scholar] [CrossRef]
  6. Gamez, D. The relationships between intelligence and consciousness in natural and artificial systems. J. Artif. Intell. Conscious. 2020, 7, 51–62. [Google Scholar] [CrossRef]
  7. Henaff, M.; Weston, J.; Szlam, A.; Bordes, A.; LeCun, Y. Tracking the world state with recurrent entity networks. arXiv 2016, arXiv:1612.03969. [Google Scholar] [CrossRef]
  8. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  9. Rao, R.P.; Ballard, D.H. Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nat. Neurosci. 1999, 2, 79–87. [Google Scholar] [CrossRef]
  10. Knill, D.C.; Pouget, A. The Bayesian brain: The role of uncertainty in neural coding and computation. Trends Neurosci. 2004, 27, 712–719. [Google Scholar] [CrossRef]
  11. Wiese, W.; Metzinger, T. Vanilla PP for philosophers: A primer on predictive processing. In Philosophy and Predictive Processing: 1; Metzinger, T., Wiese, W., Eds.; MIND Group: Frankfurt am Main, Germany, 2017. [Google Scholar] [CrossRef]
  12. Friston, K.J.; Stephan, K.E. Free-energy and the brain. Synthese 2007, 159, 417–458. [Google Scholar] [CrossRef]
  13. Bruineberg, J.; Rietveld, E. Self-organization, free energy minimization, and optimal grip on a field of affordances. Front. Hum. Neurosci. 2014, 8, 599. [Google Scholar] [CrossRef]
  14. Bruineberg, J.; Kiverstein, J.; Rietveld, E. The anticipating brain is not a scientist: The free-energy principle from an ecological-enactive perspective. Synthese 2018, 195, 2417–2444. [Google Scholar] [CrossRef] [PubMed]
  15. Seth, A.K. The Cybernetic Bayesian Brain. In Open MIND; Metzinger, T.K., Windt, J.M., Eds.; MIND Group: Frankfurt am Main, Germany, 2015. [Google Scholar] [CrossRef]
  16. Clark, A. Supersizing the Mind: Embodiment, Action, and Cognitive Extension; Oxford University Press: Oxford, UK, 2008. [Google Scholar]
  17. von Uexküll, J. The new concept of umwelt: A link between science and the humanities. Semiotica 2001, 134, 111–123. [Google Scholar] [CrossRef]
  18. Stephens, D.W.; Krebs, J.R. Foraging Theory; Princeton University Press: Princeton, NJ, USA, 1986. [Google Scholar]
  19. Schulkin, J.; Sterling, P. Allostasis: A brain-centered, predictive mode of physiological regulation. Trends Neurosci. 2019, 42, 740–752. [Google Scholar] [CrossRef] [PubMed]
  20. Clark, A. Surfing Uncertainty: Prediction, Action, and the Embodied Mind; Oxford University Press: Oxford, UK, 2015. [Google Scholar]
  21. Clark, A. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behav. Brain Sci. 2013, 36, 181–204. [Google Scholar] [CrossRef]
  22. Hohwy, J. The Predictive Mind; Oxford University Press: Oxford, UK, 2013. [Google Scholar]
  23. Sims, M.; Pezzulo, G. Modelling ourselves: What the free energy principle reveals about our implicit notions of representation. Synthese 2021, 199, 7801–7833. [Google Scholar] [CrossRef]
  24. Redish, A.D. Vicarious trial and error. Nat. Rev. Neurosci. 2016, 17, 147–159. [Google Scholar] [CrossRef]
  25. Deary, I.J. Intelligence: A Very Short Introduction; Oxford University Press: Oxford, UK, 2020; Volume 39. [Google Scholar]
  26. Weir, A.A.; Chappell, J.; Kacelnik, A. Shaping of hooks in New Caledonian crows. Science 2002, 297, 981. [Google Scholar] [CrossRef]
  27. Mobus, G.E.; Kalton, M.C. Principles of Systems Science; Springer: Durham, NC, USA, 2015. [Google Scholar]
  28. Buckner, C.J. From Deep Learning to Rational Machines: What the History of Philosophy Can Teach Us About the Future of Artificial Intelligence; Oxford University Press: Oxford, UK, 2024. [Google Scholar]
  29. Halina, M. Insightful artificial intelligence. Mind Lang. 2021, 36, 315–329. [Google Scholar] [CrossRef]
  30. Samuel, A.L. Some studies in machine learning using the game of checkers. IBM J. Res. Dev. 1959, 3, 210–229. [Google Scholar] [CrossRef]
  31. Bansal, T.; Pachocki, J.; Sidor, S.; Sutskever, I.; Mordatch, I. Emergent complexity via multi-agent competition. arXiv 2017, arXiv:1710.03748. [Google Scholar] [CrossRef]
  32. Hassabis, D.; Kumaran, D.; Summerfield, C.; Botvinick, M. Neuroscience-inspired artificial intelligence. Neuron 2017, 95, 245–258. [Google Scholar] [CrossRef]
  33. Perconti, P.; Plebe, A. Deep learning and cognitive science. Cognition 2020, 203, 104365. [Google Scholar] [CrossRef]
  34. Mitchell, M. Artificial Intelligence: A Guide for Thinking Humans; Penguin Random House UK: London, UK, 2019. [Google Scholar]
  35. Darlow, L.; Regan, C.; Risi, S.; Seely, J.; Jones, L. Continuous Thought Machines. arXiv 2025, arXiv:2505.05522. [Google Scholar] [PubMed]
  36. Földiák, P. Forming sparse representations by local anti-Hebbian learning. Biol. Cybern. 1990, 64, 165–170. [Google Scholar] [CrossRef] [PubMed]
  37. Olshausen, B.A.; Field, D.J. Sparse coding of sensory inputs. Curr. Opin. Neurobiol. 2004, 14, 481–487. [Google Scholar] [CrossRef] [PubMed]
  38. Jacobs, R.A.; Jordan, M.I.; Nowlan, S.J.; Hinton, G.E. Adaptive mixtures of local experts. Neural Comput. 1991, 3, 79–87. [Google Scholar] [CrossRef]
  39. Chen, Z.; Shen, Y.; Ding, M.; Chen, Z.; Zhao, H.; Learned-Miller, E.G.; Gan, C. Mod-Squad: Designing mixture of experts as modular multi-Task learners. arXiv 2022, arXiv:2212.08066. [Google Scholar] [CrossRef]
  40. Chollet, F. On the measure of intelligence. arXiv 2019, arXiv:1911.01547. [Google Scholar] [CrossRef]
  41. James, P.C.; Verbeek, N.A. The food storage behaviour of the northwestern crow. Behaviour 1983, 85, 276–290. [Google Scholar] [CrossRef]
  42. Loveland, K.A. Discovering the affordances of a reflecting surface. Dev. Rev. 1986, 6, 1–24. [Google Scholar] [CrossRef]
  43. Guest, O. What Does ‘Human-Centred AI’ Mean? arXiv 2025, arXiv:2507.19960. [Google Scholar]
  44. Doshi, A.R.; Hauser, O.P. Generative AI enhances individual creativity but reduces the collective diversity of novel content. Sci. Adv. 2024, 10, eadn5290. [Google Scholar] [CrossRef]
  45. Xu, W.; Jojic, N.; Rao, S.; Brockett, C.; Dolan, B. Echoes in AI: Quantifying lack of plot diversity in LLM outputs. Proc. Natl. Acad. Sci. USA 2025, 122, e2504966122. [Google Scholar] [CrossRef]

