Point and Network Notions of Artificial Intelligence Agency †

As intelligent machines become increasingly present in our environment, researchers' interest in the problem of AI (artificial intelligence) agency is growing significantly. In light of this, this paper examines the dominant trends in AI agency research with regard to their philosophical implications and provides a research commentary. Recurring themes of point and network notions of agency are identified, on the basis of which an argument for the dual-process nature of agency perception is presented. Emphasis is placed on the phenomenon of agency attribution. A novel hypothesis of a negative correlation between the perceived agency of AI and its interpretability outlines a direction for future research.


Introduction
The notion of agency has a long history of intellectual inquiry, occupying a special place in both the philosophy of mind and the philosophy of action. While the traditional theory of agency captures the phenomenon as distinctively human action, explainable in terms of the agent's desires, beliefs, and intentions, in more recent debates researchers have shifted their focus to non-human agents, such as animals or machines, which exercise a similar capability to act yet lack causally efficacious mental states [1]. Machine agency surely differs from human agency. Yet we need an inclusive conceptual framework for non-human agency, as without one we can "miss and misunderstand the massive changes in intelligent machine design and interactive media use that open up Pandora's box filled with thousands of agents" [2] (p. 62).
In recent years, research on machine agency has been particularly fruitful, stimulated by the rapid and widespread development of artificial intelligence (AI) technology. The reason for putting AI in the spotlight is its perceived similarity to human agents in terms of the capability to act in our environment. Thanks to recent advances in machine learning, the performance of AI systems (e.g., social robots, voice assistants, chatbots, or video game bots) is reaching a level where we no longer notice their instrumentality but begin to perceive their interactivity (cf. [2]). The quest for human-like "thinking machines" implies non-trivial reasons for perceiving AI systems not only as objects but also as sui generis subjects of action. We are currently witnessing AI's shift from technical artifact to artificial agent.
This issue matters not only for the philosophy of action and the psychology of agency attribution, but also for the broader debate on the social and ethical impact of technology on humanity. AI has not only great industrial potential but also unprecedented transformative power. Mariarosaria Taddeo and Luciano Floridi call it "a powerful force that is reshaping daily practices, personal and professional interactions, and environments" [3] (p. 751). As we have reached a point where AI performs human-like actions that affect others in the world, research on AI agency is more active than ever before.
Taking this into account, the paper aims to discuss the latest literature on artificial intelligence agency, and to propose two types of notions of AI agency along with the dual-process model of agency perception as a further research direction.

Point and Network Approaches
While many agency types can be distinguished in recent studies, they usually follow one of two general concepts: the point or the network notion of agency.
Point notions of agency are those capturing agency as a trait of an individual actor, explaining the phenomenon in terms of the agent's internal (functional) organization. On these accounts, agency is defined by reference to various attributes taken to be internal features of the entity under examination. Such definitions are many, too many to mention here; however, a few examples can help to paint the right picture.
In one of the most recent works, Danielle Swanepoel examines various accounts of agency to establish minimal criteria which an AI system must meet in order to be considered a genuine agent. Swanepoel suggests that there are four of these: deliberative self-reflection, awareness of self in time, critical awareness of environment, and norm violation [4]. Other exemplary studies identify criteria useful for machine agency conceptualizations such as individuality, interactional asymmetry (being a source of activity) and normativity [5], goal-oriented activity contributing to the agent's own endurance or maintenance [6], intentionality and forethought [7], adaptive regulation, self-reactiveness and self-reflectiveness [6][7][8], and first-person sense of agency [9,10].
One can observe in ongoing research an increasing trend toward entirely different accounts of agency, which I propose to call network notions. Network notions capture agency not as a fixed essence or a property that something or someone possesses, but as an attribute of the relationships among many actors. Such theories focus on the flow of agency between machine and human actors, giving both equal weight.
Referring to the agency of the network instead of independent actors is typical of actor-network theory and new materialist philosophical traditions. Such an approach is especially relevant in theorizing AI agency, as it "[h]elps to explain how agents interact with each other and allows an analysis of both artificial and non-artificial agents in the same context, avoiding the need to think in human/non-human barriers and ignoring the hierarchical distribution of actors" [11] (p. 8, cf. [12]). In a network agency-status analysis, a human-machine network (HMN) can be perceived as a set of decentralized, interconnected nodes sharing multiple relations with each other, in which AI entities not only take part as passive tools but also actively change the network, influencing other nodes in ways mostly still unknown to us. However, as the boundaries between the physical, digital, and human factors of an HMN blur, one cannot observe the agency of a singular actor; it is displayed only as an emergent product of a process of human-machine interaction with many intertwined agential factors. With the ongoing progress in AI, the contributions of humans and intelligent machines to such interactions become increasingly difficult to disentangle, which supports this argument (cf. e.g., [13][14][15]).
A new materialist contribution to network agency-status analysis can be found within the Baradian framework of agential realism, in which agency emerges intra-actionally [2,16]. On an intra-actional account, agency is understood not as an inherent property of an individual actor to be exercised, but as a dynamism of forces in which all observable objects constantly exchange and diffract, influencing one another and working inseparably. Agency is primarily observed for the network, as "Relations do not follow relata, but the other way around" [16] (p. 136). This account acknowledges the impossibility of absolutely separating the nodes of an agential network, e.g., a human user from an AI tool, since AI agency cannot emerge without both sides being involved. Emergent outcomes stem from the process of humans and AI interacting, revealing the intertwined nature of network agency (cf. e.g., [14,15,17,18]).
The network approach can be taken even further when applied to the investigation of an individual AI entity. In Latourian terms, a stable alliance of actors so firmly established that we can take its interior for granted is called a black box [19]. The internal properties of a black box do not count as long as we are concerned only with its input and output. One can similarly think of an AI agent as itself being a stable network of hidden yet entangled actors (fittingly, the term 'black box' is also used for opaque machine learning algorithms). An AI agent's 'box' may include human designers, hardware and software architectures, machine learning algorithms, training and test data, and even its end-users, all of which are made invisible in the AI's everyday use.
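The black-box idea above can be sketched in code: a minimal, purely illustrative example (all names and data are hypothetical, not any real system's API) in which an agent's constituent "actors" are folded behind a single input-output surface, so that only its responses, never its internals, are visible to the user.

```python
# Illustrative sketch of a Latourian "black box": the internal actors
# (designers, model, training data) are hidden; only input/output is exposed.
# All names and contents here are hypothetical.

class BlackBoxAgent:
    """A stable alliance of hidden actors behind one input-output surface."""

    def __init__(self):
        # Internal "actors" folded into the box, invisible in everyday use.
        self._designers = ["design team"]
        self._training_data = {"hello": "Hello! How can I help?"}
        self._model = lambda prompt: self._training_data.get(prompt, "I don't know.")

    def respond(self, prompt: str) -> str:
        # The only visible surface: input goes in, output comes out.
        return self._model(prompt)

agent = BlackBoxAgent()
print(agent.respond("hello"))  # the user sees an answer, not the network inside
```

The design point is that, as long as `respond` behaves as expected, the interior of the box "does not count" for the user, which is precisely what makes the agent appear as a single actor rather than a network.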
The network approaches to agency-status analysis seem more than promising, as they explore and transgress the boundaries between different categories of natural and artificial actors entangled in socio-material relations. This can be especially relevant in light of AI's advancement towards artificial general intelligence (AGI) and a possible intelligence explosion (cf. e.g., [8]).

Dual-Process of Agency Perception
Many definitions of machine agency are bound to an ontological context in which certain conditions must be met in order to grant AI agential status. Recently, researchers have been more willing to admit that machine agency exists while differing from human agency, and is "[p]robably the result of human interaction and perception" [18] (p. 4). Taking this into account, a strong focus should be placed on the epistemological perspective of the AI agency-status problem. This is particularly promising for human-computer interaction (HCI) as well as usability and user experience (UX) studies, as AI systems exert an increasing level of influence on both human and machine actors.
Agency attribution is triggered by a mental mechanism I propose to call the agential stance, which extends the notion of intentional binding (e.g., [36,37]). Daniel Dennett identifies three basic mental strategies, which he calls 'stances', for predicting and explaining the behavior of external-world objects: the physical stance, the design stance, and the intentional stance [38]. Like the intentional stance, the agential stance can be regarded as an instrumentally rational heuristic for predicting, explaining, and creating stable interpretations of external-world phenomena by a mindful, as well as a mindless, attribution of agency (cf. cognitive miser theory, e.g., [39], pp. 65-71). The agential stance does not assume intentionality (or any other perceived mental properties), but rather action-outcome contiguity and intentional binding, which trigger a sense of external action ownership in one who observes actions similar to one's own.
Philosophical research can provide us with many conceptual tools that are useful for AI agency-status examination and that fall into either the point or the network category. What I further propose is that the point-network theoretical distinction tracks the dual-process nature of AI agency perception in humans. The two types of thinking involved in human cognition and reasoning about machine agency can be explained analogously to Daniel Kahneman's System 1 and System 2 processing [40]. It can be argued that it is the interpretability of an AI system that has a major (though not exclusive) impact on HCI and triggers point-type (Type 1) or network-type (Type 2) perceptions of AI agency-status.
The final hypothesis I would like to pose concerns the existence of a negative correlation between the perceived agency of AI systems and their interpretability. The more opaque an AI system is, hiding its inner workings as a nontransparent and poorly explainable black box, the more likely a human user is to adopt Type 1 point thinking along with the agential stance. Conversely, the more an AI system implements explainability, transparency, and interpretability, exposing its inner workings, the more likely a human user is to turn to Type 2 network analysis, trying to figure out the role of the AI system within a larger HMN and the roles of its parts.
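The hypothesized relationship could eventually be tested on empirical data; as a minimal sketch, the toy simulation below computes a Pearson correlation over entirely hypothetical interpretability and perceived-agency ratings (the numbers are invented for illustration, not results of any study).

```python
# Sketch of the hypothesized negative correlation between an AI system's
# interpretability and its perceived agency. All ratings are hypothetical.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 0-1 ratings: per the hypothesis, more interpretable systems
# are attributed less agency.
interpretability = [0.1, 0.3, 0.5, 0.7, 0.9]
perceived_agency = [0.8, 0.7, 0.5, 0.3, 0.2]

print(round(pearson_r(interpretability, perceived_agency), 3))  # → -0.992
```

A real investigation would of course replace the toy ratings with measured interpretability scores and agency-attribution responses from human participants; the sketch only makes the shape of the predicted relationship concrete.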
The proposed distinction fosters research in the philosophy of mind and the folk psychology of attribution, but its implications may also prove relevant for the HCI and UX domains. The two types of agency perception may yield differing and sometimes conflicting results. Spontaneous agency attribution associated with the agential stance may improve overall user experience and trigger social reactions when interacting with AI [14,15,23,26]. Network thinking may impede agency attribution and amount to "opening the black box". These claims are, however, outside the scope of this paper.

Conclusions
The aim of this paper was to discuss the latest literature on artificial intelligence agency, as well as to identify its dominant trends and provide a research commentary. The review showed that theories of AI agency often follow one of two general concepts: the point or the network notion of agency. The former captures agency as a capacity exercised by individual actors, while the latter treats the relations among various actors (human and machine) as the primary agential and observable structures. Many similarities to accounts of actor-network theory and new materialist philosophical traditions were shown.
AI agency is explicated as a property perceived and attributed by humans. Agency attribution can be viewed as a mental strategy for interpreting, predicting, and explaining the behavior of AI agents that I propose to call the agential stance. Since we exercise a similar stance towards human actors, agency attribution may involve mindless anthropomorphism and trigger point-type perceptions of the AI agency-status.
An argument can be raised that the point-network theoretical distinction follows the actual dual-process nature of AI agency perception in humans. AI interpretability may affect the attribution process and determine the type of perception. A working hypothesis on the negative correlation between the perceived agency of AI systems and the level of their interpretability should be examined in a further empirical investigation.
While the presented account has yet to be fully explicated, the preliminary research suggests that the proposed conceptual framework, with point and network notions forming a dual system of AI agency-status perception, may provide significant input for further research not only in the philosophy of action but also in the folk psychology of attribution during human-computer interaction, as well as in AI user experience studies. Even if a strong ontological rationale for granting AI agential status cannot be reconstructed, there may be instrumentally rational reasons for attributing agency. This makes perceived agency epistemically relevant to successful and satisfying HCI.