Proceeding Paper

Regulatory Intentionality in Artificial Systems †

Institute of Sociology, Cognitive Science and Philosophy, University of the National Education Commission, 30-084 Kraków, Poland
Presented at the 1st International Online Conference of the Journal Philosophies, 10–14 June 2025; Available online: https://sciforum.net/event/IOCPh2025.
Proceedings 2025, 126(1), 16; https://doi.org/10.3390/proceedings2025126016
Published: 5 November 2025
(This article belongs to the Proceedings of The 1st International Online Conference of the Journal Philosophies)

Abstract

Intentionality, understood as the capacity of systems to be “about” something, remains a central issue in the philosophy of mind and cognitive science. Classical approaches face significant limitations, especially when applied to artificial systems. Representationalism struggles with the symbol grounding problem, functionalism reduces intentionality to causal roles, and enactivism restricts it to biological organisms. This paper proposes a cybernetic perspective in which intentionality is conceived as a regulatory function. Feedback mechanisms and homeostasis enable systems to maintain stability and adapt to changing conditions. Even simple systems may, in this sense, exhibit minimal intentionality. Such an approach allows intentionality to be treated as a graded phenomenon and highlights new possibilities for understanding the agency of artificial intelligence.

1. Introduction

The concept of intentionality, understood as the capacity of mental states to be directed toward something, was explicitly formulated by Franz Brentano, who described it as the distinguishing mark of the mental [1]. Since then, the problem has been developed within phenomenology, analytic philosophy, and cognitive science. Daniel Dennett [2] proposed an interpretationist account of intentionality, while John Searle [3] emphasized the distinction between original and derived intentionality. More recently, enactive approaches to cognition have highlighted the role of embodiment and situated interaction with the environment [4].
The rapid development of artificial intelligence has moved the debate on intentionality beyond the human mind. Artificial systems are now capable of generating responses, predictions, and adaptive behaviors that invite questions about whether, and in what sense, they can be said to exhibit intentionality. Classical theories encounter serious limitations in this context: representationalism struggles with the symbol grounding problem [5], functionalism tends to reduce intentionality to causal roles, and enactivism often restricts it exclusively to biological organisms. Even where enactive ideas inform robotics, mainstream enactivism often treats metabolic organization as a categorical boundary that bars artificial systems from intentionality.
This paper proposes a cybernetic perspective that reinterprets intentionality in regulatory rather than representational terms. The point is not to claim that regulation is intentionality, but to show that feedback, homeostasis, and adaptive control can serve as a model for understanding how minimal forms of directedness may arise without invoking semantic content or consciousness.
What I call regulatory intentionality therefore belongs to the domain of quasi-intentionality: a functional or analogical manifestation of “aboutness” that reproduces certain logical and normative features of intentional behavior, grounding its significance in regulatory relations rather than in representational content. The purpose is not to grade intentionality but to clarify the minimal organizational conditions that render such quasi-intentional ascriptions conceptually coherent and empirically useful.

2. Classical Theories of Intentionality: A Critical Overview

Since Franz Brentano identified intentionality as the “mark of the mental”, distinguishing psychological phenomena from all others [1], the concept has been developed in various traditions of philosophy of mind. Contemporary debates revolve around three major paradigms: representationalism, functionalism, and enactivism. Each provides valuable insights but encounters serious challenges when applied to artificial systems.
Representationalism maintains that mental states are intentional insofar as they represent objects or states of affairs [6]. In the context of artificial intelligence, however, this model faces the symbol grounding problem: systems can manipulate symbols syntactically without anchoring them in bodily experience or lived context [5]. As a result, intentionality becomes a formal fiction, and meaning is projected onto the system by the user rather than generated internally. Attempts to rescue this framework through informational accounts [7] encounter similar objections: while systems process data, they do not thereby generate content in the phenomenological sense.
Functionalism defines mental states in terms of their causal roles within a system’s structure [8]. This framework maps neatly onto computational architectures but proves too permissive: any sufficiently complex automaton that meets functional conditions could be deemed intentional. The concept thus risks inflation, losing its specificity. Dennett’s interpretative strategy [2] proposes that intentionality is not an ontological fact but a heuristic stance. Yet this dissolves the distinction between genuine aboutness and merely ascribed intentionality. Critics further argue that functionalism neglects embodiment and lived experience, thereby reducing intentionality to a bare causal network.
Enactivism seeks to break with representationalist traditions by identifying intentionality with dynamic engagement between system and environment. Cognition arises not from internal representations but from embodied action and autopoiesis [4]. This perspective restores the importance of context and embodiment but excludes artificial systems from intentional phenomena. The absence of biological body and metabolism is treated as a categorical barrier, rendering enactivism a conservative and exclusionary view.
None of these approaches provides an adequate framework for capturing the forms of intentionality that might emerge in artificial systems. Representationalism falters on the problem of symbols, functionalism dilutes the concept into causal abstraction, and enactivism restricts intentionality exclusively to biology. This tension signals the need for a reinterpretation, one that conceives intentionality in terms of regulation and adaptation rather than symbolic semantics, causal formalism, or biological embodiment [9].

3. Intentionality as a Function of Feedback and Homeostasis

Classical theories of intentionality have traditionally focused on the question of content. They ask how states can represent objects, events, or states of affairs, and how meaning can be grounded within a system. The regulatory perspective shifts this explanatory order. It assumes that what is primary is not the possession of semantic content but the ability of a system to sustain its organization through feedback loops and homeostatic mechanisms. Models and representations, when they arise, are secondary instruments of control and prediction rather than the foundations of aboutness. Intentionality, in this view, emerges first as the capacity to preserve equilibrium and autonomy in the face of perturbations, and only later as the ability to encode, symbolize, or represent.
In cybernetic terms, regulation occurs through closed feedback loops that keep certain variables within a viability set [10,11]. When variables deviate from these bounds, error signals are generated, and corrective actions are triggered. Such an architecture introduces a minimal form of normativity: the system’s behavior can be evaluated in terms of success and failure with respect to maintaining viability. This evaluative dimension is sufficient to ground a thin form of directedness. The system’s activities are about perturbations and relevant environmental magnitudes precisely insofar as they serve to counteract deviations that threaten its organizational stability.
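To make this architecture concrete, the following sketch (an illustrative toy, not a system discussed in this paper; all names and numerical values are assumptions) implements a single closed feedback loop: an error signal is computed against a setpoint, a corrective action counteracts the deviation, and success or failure is defined by whether the regulated variable stays within its viability set.

```python
# Minimal sketch of a closed feedback loop in the spirit of Wiener and Ashby [10,11].
# The class name, setpoint, bounds, and gain are illustrative assumptions.
class Regulator:
    def __init__(self, setpoint: float, lower: float, upper: float, gain: float = 0.5):
        self.setpoint = setpoint                 # preferred value of the regulated variable
        self.lower, self.upper = lower, upper    # viability set: [lower, upper]
        self.gain = gain                         # how strongly corrections respond to error

    def step(self, observed: float) -> float:
        """Return a corrective action computed from the current error signal."""
        error = self.setpoint - observed         # deviation from the preferred state
        return self.gain * error                 # negative feedback counteracts the deviation

    def viable(self, observed: float) -> bool:
        """Success or failure is defined by staying inside the viability set."""
        return self.lower <= observed <= self.upper


reg = Regulator(setpoint=37.0, lower=35.0, upper=39.0)
value = 34.0                                     # a perturbation has pushed the variable out of bounds
for _ in range(10):
    value += reg.step(value)                     # each corrective action closes the loop
print(round(value, 2), reg.viable(value))        # the variable is pulled back into the viability set
```

The evaluative dimension described above appears here only in the thinnest possible form: the loop can be said to succeed or fail relative to its viability set, nothing more.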
To avoid trivializing intentionality as mere reactivity, it is necessary to specify minimal conditions for what we call regulatory intentionality. First, the system must exhibit organizational autonomy, meaning that its processes maintain its structure across perturbations rather than being wholly orchestrated from the outside [12,13]. Second, genuine feedback control must be present, with error signals guiding the modulation of activity [10]. Third, there must be identifiable homeostatic variables whose stability defines success or failure for the system [11]. Fourth, the system must demonstrate counterfactual sensitivity, so that the same regulatory principle applies across a variety of contexts, with appropriate variations of response rather than rigid repetition. Fifth, the system must possess plasticity, enabling its regulatory parameters to change as a result of history-dependent processes such as learning or adaptation [11,14]. Sixth, the regulator must embody sufficient internal variety to match the diversity of disturbances it is expected to neutralize, in accordance with Ashby’s Law of Requisite Variety and the Good Regulator theorem [11,15]. Together, these conditions establish a nontrivial sense in which the system’s behavior is norm-governed and directed, distinguishing regulatory intentionality from simple stimulus–response mechanisms.
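The sixth condition admits a simple operational illustration. In the toy example below (disturbances, responses, and the outcome table are invented solely for illustration), a regulator can hold the essential variable to acceptable outcomes only when its repertoire of responses is at least as varied as the disturbances it must neutralize, which is the point of Ashby’s Law of Requisite Variety [11,15].

```python
# Toy illustration of requisite variety: with too few responses, some disturbances
# inevitably drive the essential variable out of bounds.
DISTURBANCES = ["heat_wave", "cold_snap", "power_surge"]

def outcome(disturbance: str, response: str) -> str:
    # the essential variable stays "ok" only when the response matches the disturbance
    matches = {"heat_wave": "cool", "cold_snap": "warm", "power_surge": "shed_load"}
    return "ok" if matches[disturbance] == response else "out_of_bounds"

def attainable_outcomes(responses: list[str]) -> set[str]:
    """Best case: the regulator picks its best available response to each disturbance."""
    return {min((outcome(d, r) for r in responses), key=lambda o: o != "ok")
            for d in DISTURBANCES}

print(attainable_outcomes(["cool"]))                       # insufficient variety: {'ok', 'out_of_bounds'}
print(attainable_outcomes(["cool", "warm", "shed_load"]))  # requisite variety: {'ok'}
```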
Once these conditions are in place, regulatory intentionality can be understood as coming in degrees of complexity. The simplest form is reactive regulation, exemplified by the thermostat or the Watt governor, in which a single variable is maintained around a fixed setpoint. At this level, aboutness is extremely thin, restricted to the immediate magnitude that is being stabilized. A more sophisticated form is adaptive regulation, as in Ashby’s homeostat [11], where parameters of control themselves change in order to recover stability after disturbances. In this case, the system’s regulatory scope expands, allowing it to cope with novel perturbations rather than only recurring ones. Model-based regulation constitutes a further step: the system embodies an internal model that anticipates disturbances and selects actions prospectively. According to Conant and Ashby’s Good Regulator theorem, every effective regulator must embody a model of the system it regulates [15]. Here “model” is meant in Conant and Ashby’s operational sense rather than as a commitment to propositional semantics: modeling emerges as a consequence of the logic of regulation, not as a prior semantic posit. At the most advanced level, predictive regulation integrates homeostatic control with mechanisms of anticipation, planning, and forecasting. The free-energy principle [14] and related theories of interoceptive inference [16] describe cognition as the minimization of expected deviations from preferred states, thereby unifying regulation and prediction in a single normative framework.
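The step from reactive to adaptive regulation can likewise be sketched in code. The toy model below is an assumption-laden illustration, not a reconstruction of Ashby’s homeostat: when the current control parameter fails to keep the essential variable within its bounds, the regulator switches to new parameters until stability is recovered.

```python
# Adaptive regulation in the spirit of Ashby's homeostat [11]: the control
# parameter itself is changed whenever the essential variable leaves its bounds.
import random

def adaptive_regulation(disturbance_gain: float = 0.8) -> float:
    """Search for a control parameter that keeps the essential variable viable."""
    value, stable_steps = 0.0, 0
    control_gain = random.uniform(-1.0, 1.0)           # current regulatory parameter
    while stable_steps < 50:                           # require sustained viability
        # disturbed dynamics plus the regulator's contribution
        value = (disturbance_gain + control_gain) * value + 1.0
        if abs(value) > 10.0:                          # essential variable left its viable bounds
            value, stable_steps = 0.0, 0
            control_gain = random.uniform(-1.0, 1.0)   # step-change of parameters, as in the homeostat
        else:
            stable_steps += 1
    return control_gain

print(adaptive_regulation())
```

The point of the sketch is only that the regulatory scope expands: the system does not merely correct a deviation, it revises how it corrects deviations.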
In this perspective, teleology is naturalized. Goals are not propositions encoded inside the system but the attractors defined by viability constraints. A system acts teleologically insofar as its behavior tends toward the preservation of those constraints [17]. Signals and states acquire their significance from their functional role in sustaining organizational stability. Thus, the apparent semantics of signals is use-relative: a variable is about temperature or load only because its modulation contributes to keeping the system within the viability set. Intentionality is explained as a property of regulatory organization rather than as the presence of intrinsic semantic content.
This account responds to common objections, such as the so-called thermostat fallacy. If a thermostat counts as intentional, then the concept seems trivialized. But mere reactivity is excluded by the criteria outlined above. A thermostat lacks counterfactual sensitivity, plasticity, and requisite variety. Regulatory intentionality requires robustness across perturbations, improvement through learning or adaptation, and norm-governed evaluation of outcomes defined by the system’s own organization. These criteria prevent trivialization while preserving the possibility of minimal intentionality in artificial systems.
The regulatory stance also reframes the relationship between intentionality and representation. Representations may emerge as efficient means of regulation, but they are not the ground of aboutness. Their function is derivative of control requirements. This makes it possible to acknowledge the role of internal models in advanced systems while avoiding the pitfalls of representationalism.
The implications for artificial systems are significant. Designing AI with regulatory intentionality means prioritizing the identification of homeostatic variables or task-specific viability sets and building architectures that maintain these variables within acceptable bounds. It also means ensuring that controllers are adaptive, learnable, and endowed with sufficient internal variety to cope with environmental disturbances. Reinforcement learning can be interpreted as one such regulatory strategy, where rewards encode preferred states and policies evolve to keep trajectories within viable regions, insofar as the reward function captures constraints aligned with task-specific viability sets. Model-based approaches, predictive controllers, and active inference architectures exemplify higher levels of regulatory intentionality, since they anticipate and counteract perturbations before they occur.
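To indicate how this reading of reinforcement learning might look in practice, the following sketch uses tabular Q-learning with a reward defined as negative deviation from a homeostatic setpoint, so that the learned policy tends to keep the state trajectory inside a viable region. The environment, states, actions, and hyperparameters are invented for illustration and are not an architecture described in this paper.

```python
# Reinforcement learning read as a regulatory strategy: the reward encodes the
# preferred state, and the learned policy counteracts environmental drift.
import random
from collections import defaultdict

ACTIONS = [-1, 0, +1]                    # cool, do nothing, heat
SETPOINT = 5                             # preferred value of the homeostatic variable
VIABLE = range(3, 8)                     # task-specific viability set: states 3..7

def step(state: int, action: int) -> int:
    drift = random.choice([-1, 0, 1])                # environmental disturbance
    return max(0, min(10, state + action + drift))   # bounded toy state space 0..10

q = defaultdict(float)                   # Q-values over (state, action) pairs
state, epsilon, alpha, gamma = SETPOINT, 0.1, 0.2, 0.9
for _ in range(20_000):
    action = (random.choice(ACTIONS) if random.random() < epsilon
              else max(ACTIONS, key=lambda a: q[(state, a)]))
    nxt = step(state, action)
    reward = -abs(nxt - SETPOINT)        # reward encodes deviation from the preferred state
    best_next = max(q[(nxt, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = nxt

# The greedy policy counteracts drift (heating in low states, cooling in high ones),
# which tends to keep trajectories inside the viability set.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(11)})
```

On the present account, what matters is not the learning algorithm itself but that success and failure are defined by the system’s own preferred states rather than by an external interpreter.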
Regulatory intentionality conceives aboutness as enacted by feedback and homeostasis. It is inherently normative, graded, and compatible with artificial systems that lack consciousness or biological embodiment. Representation, when present, is a consequence of good regulation rather than its foundation. This framework avoids both the inflation of intentionality characteristic of functionalism and the restriction to biology found in enactivism, thereby offering a more flexible and inclusive account of intentional phenomena in artificial systems. This framework also clarifies how so-called “quasi-intentionality” can be recast as lower-degree regulatory intentionality whenever norm-governed feedback and homeostasis are in place.

4. Quasi-Intentionality and Regulation

Dennett [2] introduced the notion of “quasi-intentionality” to describe systems that, while lacking genuine mental states, can nevertheless be interpreted as if they had beliefs and desires. A chess program, for example, may be said to “want” to protect its queen or “believe” that a certain move is advantageous, though these attributions are understood metaphorically. Similarly, Millikan [18] and Cummins [19] developed accounts in which purpose and meaning are ascribed functionally, without requiring phenomenological intentionality. From the standpoint of regulatory intentionality, the category of quasi-intentionality acquires a new grounding. Systems that maintain stability through feedback and homeostasis exhibit norm-governed behavior, which provides a stronger basis for attributions of “aboutness” than Dennett’s purely interpretative stance. While Dennett’s account risks collapsing intentionality into a pragmatic fiction, the cybernetic approach anchors it in the organizational dynamics of the system itself. In this sense, quasi-intentionality becomes not merely a metaphor but a minimal, graded form of intentionality, tied to the success or failure of regulation.
This shift clarifies why some artificial systems (e.g., thermostats) fall short of genuine intentionality; they lack counterfactual sensitivity, plasticity, or requisite variety, while others, such as adaptive controllers or predictive architectures, come closer to exhibiting non-trivial intentionality. What is often called “quasi-intentionality” may therefore be redescribed as a lower degree of regulatory intentionality, one that captures directedness without positing full semantic content or consciousness.
Recent advances in artificial intelligence make this distinction especially salient. Large language models, for instance, are often described as “believing” or “wanting” certain things because their outputs mimic intentional discourse. From Dennett’s perspective, such talk exemplifies the intentional stance applied for predictive convenience. Yet these models lack the regulatory dynamics that would ground their behavior in normativity. By contrast, adaptive AI agents that integrate reinforcement learning with environmental feedback do instantiate regulatory loops: they adjust policies in response to error signals, optimize homeostatic variables such as reward balances, and in advanced forms employ predictive models to anticipate disturbances. Such systems exhibit a stronger claim to minimal intentionality than purely generative models, because their activity is evaluated against criteria of success and failure internal to their regulatory organization. Stand-alone generative models lack closed sensorimotor loops; agency appears once outputs are re-fed into environment-coupled control.
Seen from this angle, the cybernetic framework helps to discriminate between mere quasi-intentional ascriptions based on interpretative projection and genuine regulatory intentionality grounded in systemic dynamics. It provides a principled way to analyze where, and to what extent, contemporary AI systems cross the boundary from metaphorical intentionality into normatively evaluable forms of directedness.
These considerations delineate the boundary of the regulatory account. Quasi-intentional organization captures the structural conditions under which directedness and normativity emerge in artificial systems, but it does not bridge the gap to phenomenological or semantic intentionality. Its scope remains explanatory rather than constitutive: it describes how systems behave as if they were about something, without implying that they truly are.

5. Discussion

The regulatory account of intentionality demonstrates that the question of “aboutness” need not be resolved in terms of representational content or consciousness. Classical theories remain trapped in a dilemma: either intentionality requires semantic content that artificial systems cannot genuinely possess, or it is confined to biological organisms alone. The cybernetic perspective bypasses this impasse by redefining intentionality as a property of regulatory organization.
On this view, intentionality is not determined by what a system “has in its head” but by how it maintains itself in a dynamic environment. The directedness of its activity is grounded in normative criteria of success and failure, namely whether homeostatic variables remain within the viability set. This provides an account of why even simple regulatory mechanisms exhibit rudimentary forms of intentionality, while more sophisticated predictive architectures manifest higher degrees.
The regulatory approach also has the advantage of preventing conceptual inflation. Not every automaton qualifies as intentional: genuine regulatory intentionality requires organizational autonomy, feedback control, counterfactual sensitivity, plasticity, and requisite variety. These criteria enable us to distinguish mere reactions from normatively evaluable actions. In doing so, the account avoids both the inflationary tendencies of functionalism, which risk labeling any sufficiently complex causal structure as intentional, and the exclusionary tendencies of enactivism, which deny intentionality to artificial systems altogether.
The most significant philosophical consequence of this shift lies in redirecting the debate. The crucial question is no longer whether a system carries intrinsic meanings, but whether it regulates itself in ways that can be normatively assessed. Intentionality emerges as graded and multidimensional rather than binary. This makes it possible to speak of different levels of intentionality, ranging from minimal forms in simple regulators to advanced forms in adaptive and predictive artificial architectures.
By reframing intentionality as regulation, philosophy of mind gains a tool for analyzing artificial systems without anthropomorphizing them. Artificial intelligence can thus be studied as a domain in which new forms of intentionality emerge, forms that do not depend on biological embodiment or consciousness but on the capacity for regulation and adaptation. In this way, the cybernetic account is not only a theoretical alternative but also a practical guide for the design of autonomous systems.
It is worth noting that recent post-phenomenological and computational debates address related questions under the heading of technological intentionality. Recent analyses by Peter-Paul Verbeek [20], Roberto Redaelli [21], and Dmytro Mykhailov and Nicolò Liberati [22] focus on how technology mediates meaning and agency, while studies in computer science explore intentional behavior and goal-directed reasoning in autonomous agents [23,24]. These perspectives highlight the growing convergence between philosophical and computational analyses of agency. The present account situates itself within this broader landscape, offering a cybernetic framework that explains the organizational dynamics underlying such phenomena without invoking phenomenological premises.

6. Conclusions

This paper has argued that intentionality can be reconceptualized in cybernetic terms, not as representational content but as a regulatory function grounded in feedback and homeostasis. By doing so, it becomes possible to explain directedness and normativity in artificial systems without appealing to semantic content or biological embodiment.
The regulatory framework offers a graded and non-trivial account of intentionality: it distinguishes simple reactivity from genuinely norm-governed behavior and provides criteria for evaluating when artificial systems can be said to act about something. In doing so, it avoids the inflationary risks of functionalism and the exclusionary limits of enactivism.
The discussion of quasi-intentionality highlights how interpretative ascriptions (e.g., Dennett’s chess program or large language models) can be re-evaluated within the regulatory perspective. What appears as “quasi-intentionality” under an interpretative stance may, when grounded in feedback, homeostasis, and adaptive control, amount to minimal forms of genuine intentionality. This distinction clarifies the difference between mere projection of meaning and systemic organization that supports norm-governed directedness.
For philosophy of mind, this perspective expands the scope of intentionality beyond consciousness and biology. For the design of artificial intelligence, it points to architectures that prioritize adaptive regulation, homeostasis, and predictive control as the foundations of autonomous agency.
Future work should explore how regulatory intentionality scales in complex environments and how it interacts with higher-level cognitive processes, including representation and reasoning, when these emerge as tools of good regulation rather than as the basis of aboutness. Whether regulatory organization can, under certain conditions, become constitutive of cognition itself remains an open question.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The author thanks the organizers of the 1st International Online Conference of the Journal Philosophies (IOCPh 2025) for the opportunity to present an earlier version of this paper.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Brentano, F. Psychology from an Empirical Standpoint; Routledge: London, UK, 1995.
2. Dennett, D. The Intentional Stance; MIT Press: Cambridge, MA, USA, 1987.
3. Searle, J. Intentionality: An Essay in the Philosophy of Mind; Cambridge University Press: Cambridge, UK, 1983.
4. Varela, F.J.; Thompson, E.; Rosch, E. The Embodied Mind: Cognitive Science and Human Experience; MIT Press: Cambridge, MA, USA, 1991.
5. Harnad, S. The symbol grounding problem. Phys. D Nonlinear Phenom. 1990, 42, 335–346.
6. Fodor, J.A. Psychosemantics: The Problem of Meaning in the Philosophy of Mind; MIT Press: Cambridge, MA, USA, 1987.
7. Dretske, F. Naturalizing the Mind; MIT Press: Cambridge, MA, USA, 1995.
8. Putnam, H. Mind, Language and Reality: Philosophical Papers, Volume 2; Cambridge University Press: Cambridge, UK, 1975.
9. Chalmers, D.J. The Conscious Mind: In Search of a Fundamental Theory; Oxford University Press: New York, NY, USA, 1996.
10. Wiener, N. Cybernetics: Or Control and Communication in the Animal and the Machine; MIT Press: Cambridge, MA, USA, 1948.
11. Ashby, W.R. An Introduction to Cybernetics; Chapman & Hall: London, UK, 1956.
12. Di Paolo, E.A. Autopoiesis, adaptivity, teleology, agency. Phenomenol. Cogn. Sci. 2005, 4, 429–452.
13. Barandiaran, X.E.; Di Paolo, E.A.; Rohde, M. Defining agency: Individuality, normativity, asymmetry, and spatio-temporality in action. Adapt. Behav. 2009, 17, 367–386.
14. Friston, K. The free-energy principle: A unified brain theory? Nat. Rev. Neurosci. 2010, 11, 127–138.
15. Conant, R.C.; Ashby, W.R. Every good regulator of a system must be a model of that system. Int. J. Syst. Sci. 1970, 1, 89–97.
16. Seth, A.K. The cybernetic Bayesian brain: From interoceptive inference to sensorimotor contingencies. Open MIND 2015, 35, 1–24.
17. Rosenblueth, A.; Wiener, N.; Bigelow, J. Behavior, purpose and teleology. Philos. Sci. 1943, 10, 18–24.
18. Millikan, R.G. Language, Thought, and Other Biological Categories: New Foundations for Realism; MIT Press: Cambridge, MA, USA, 1984.
19. Cummins, R. Meaning and Mental Representation; MIT Press: Cambridge, MA, USA, 1989.
20. Verbeek, P.P. Cyborg intentionality: Rethinking the phenomenology of human–technology relations. Phenomenol. Cogn. Sci. 2008, 7, 387–395.
21. Redaelli, R. Intentionality gap and preter-intentionality in generative artificial intelligence. AI Soc. 2024, 40, 2525–2532.
22. Mykhailov, D.; Liberati, N. A study of technological intentionality in C++ and generative adversarial models: Phenomenological and post-phenomenological perspectives. Found. Sci. 2023, 28, 841–857.
23. Córdoba, F.C.; Judson, S.; Antonopoulos, T.; Bjørner, K.; Shoemaker, N.; Shapiro, S.J.; Piskac, R.; Könighofer, B. Analyzing intentional behavior in autonomous agents under uncertainty. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23), Macao, China, 19–25 August 2023; pp. 372–381.
24. Ward, F.R.; MacDermott, M.; Belardinelli, F.; Toni, F.; Everitt, T. The reasons that agents act: Intention and instrumental goals. arXiv 2024, arXiv:2402.07221.