1. Introduction
To calculate large and sometimes infinite spaces, game engines use a computational method called floating-point arithmetic, which trades exact precision for approximate representation across an enormous range of magnitudes. Because a floating-point number carries a fixed count of significant digits, the gaps between representable values widen as magnitudes grow. This imprecision creates gaps in information that allow the game engine simultaneously to know and not know the location of a game object, and the calculated object to exist through sporadic nonexistence. As such, floating-point arithmetic’s calculation of the object, its spatial and temporal reasoning about the object, calls into question a machine aesthetic judgment that an observer might struggle to describe. Reza Negarestani’s Intelligence and Spirit [1] outlines a hypothetical automaton that demonstrates spatiotemporal perception and judgment, from which observers may describe the automaton’s artificial general intelligence using linguistic analogy [1]. Negarestani’s method rests on two premises: that thought is composed of linguistic interaction, and that by using a linguistic analogy to describe a goal-oriented, causally reducible machine, one might reconstruct the machine’s rational agency. This paper first looks at how Negarestani reconstructs a machine general intelligence and how he describes this as machine intuition. Understanding that Negarestani’s use of analogy also implies the particular position of whoever makes the analogy, I argue for an aesthetic method of “machine performance”, in which the observer’s linguistic description of a machine’s aesthetic judgment, such as through floating-point arithmetic, can generate new or alternative semantic and syntactic structures and possibilities of thinking.
2. Describing the Automaton
Negarestani begins with a thought experiment, following an automaton possessing complex features that allow it to respond to its own environment through goal-oriented behaviors. Such features include a sensory integration system with multilevel outputs and varying perspective inputs, as well as a memory that encodes, retrieves, and simulates environments ([1] pp. 145–148). Importantly, Negarestani’s automaton interacts with objects in its environment based on the causal conditions of its goal orientation, not out of a conceptual understanding of, or indifference to, the objects it perceives. As the automaton goes about its environment, making decisions on the goals it itself can instantiate, an observer may be confronted with the question of how they may describe the automaton’s experience of, and ability to make decisions within, its environment. As it seems there is no way to non-linguistically describe non-linguistic behavior, Negarestani argues that an analogical description by means of language and, therefore, concept-using reasoning is inevitable ([1] p. 149). However, while this analogy might at first constrain the automaton within the logic of the language used, Negarestani argues that a careful application of analogy might productively reconstruct how the automaton perceives and orders itself and its environment under a different logic. This “virtuous circle of analogy” makes two arguments: first, that language is the only resource available “for representing the intelligible order” (as communicating our conceptual, syntactic relationship to the world), and second, that “the causal–heuristic interaction of the automaton instantiates precisely the structures from which our semantic structures have (in part) evolved,” or that the act of description surfaces the description’s conceptual underpinning ([1] p. 150). What one might argue from this, then, is that there are conceptual possibilities that automata perform through their nonconceptual interactions with the environment and our analogous interaction with them.
Negarestani argues that for the automaton to interact with its environment, it must be able to intuit space and time. He describes this intuition by first offering a “rough” analogical narrative of the automaton’s perception of events and then a “non-analogical” narrative of the same interaction, meaning one that describes objects with full semantic features. Here, Negarestani borrows Wilfrid Sellars’s distinction between seeing of and seeing as, where categorical properties and sensations are not associated with objects (seeing of) but are eventually assigned to objects through intuition (seeing as) [2]. Despite the narratives’ differences, Negarestani argues that both demonstrate an intuitive, syntactic orderliness of space and time from a nonconceptual point of view, premised on the particular position of the automaton itself within its environment. Likening the automaton to a predator chasing its prey, where chasing and eating drive the predator’s goal, Negarestani writes that the automaton occupies an endocentric frame of reference, as the predator in space, that also enables an allocentric view of its goal, catching the prey in a space different from its own. This ability to act on spatial difference, even at the nonconceptual, non-self-conscious level, instantiates a spatial ordering that sees as ([1] pp. 175–182). Similarly, the automaton must be able to locate its particular position in time, which allows it to be aware of its interaction with items both in an item’s presence and in its absence, as a memory. This memory allows a predator to chase its prey in space precisely because the predator is aware of the prey being in a location at one moment in time and not being in that location at another. What I mean to highlight in Negarestani’s thought experiment is that through the analogous narration of the automaton, an observer may witness the automaton making decisions within its environment to complete a given task, which prompts the observer to describe its decisions through linguistic analogy.
3. Machine Performance
Today, the term “machine performance” refers to a teleological, goal-oriented system; it refers exclusively to an industrial understanding of machine efficiency and optimization toward the fulfillment of an engineered purpose. From the perspective of someone such as a performance engineer, then, the performance of the automaton is, under a given value system, the measure of how well it executes goals. From the perspective of a performance artist or scholar, however, machine “performance” might mean an observed machine’s movement of itself and directed items, or bodies, through space and time, as an embodied enactment. Combining these two understandings of performance, from the fields of engineering and art, we might build a definition of machine performance as a causally driven ordering of objects, or bodies, expressed through the machine’s determination of itself and the objects around it, throughout space and time, with a participating audience. The role of the audience or observer might vary, but at the very least, they may participate in the work through the interpretation of what the machine enacts.
Returning to Negarestani’s second argument of the virtuous circle of analogy, that the automaton’s interaction with the environment “instantiates precisely the structures from which our semantic structures have (in part) evolved,” it seems that the possibility of an analysis of machine performance lies not just in the machine’s expression of nonconceptual intuition of space and time, but also in the fact that our description and understanding of this intuition surfaces, and may alter, our conceptual ordering of the world. For example, as discussed earlier, it is possible for an object to fall infinitely within a virtual environment. If the automaton is the virtual environment software, and calculating the location of the falling object is the automaton’s goal, then our automaton must intuit a space-time that is surely destabilizing to our own understanding. Say this automaton uses floating-point arithmetic to locate the object as it falls: after a while, the calculation becomes less precise, the automaton rounds the coordinates of the object’s location, and it simplifies the rendering of the object. Once a sphere, the falling object will be rendered by the automaton as a cube, then a tetrahedron, then as nothing, only to reappear as several intersecting planes. As the automaton continues to locate the falling object, we see it, simultaneously, as there and not there (as present and as absent, as infinite and as finite, as incomputable and as computable). This description of the automaton’s causal calculation of a falling object defies a predicative ordering of objects, in which we can say an object “is” something, yet at the same time, we are forced to impose a linguistic structure through the act of description. Following Negarestani, I am arguing that using an analogy to describe and analyze the machine’s performance can create an opening for a new semantic and syntactic structure; the question, however, is how we might reorder grammar to describe the sensing of this ordering.
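The falling object’s paradoxical state can be made concrete with a small Python sketch (a hypothetical model simulating the 32-bit coordinates many engines use for world positions, not any specific engine’s code): the same downward step that registers near the origin is silently absorbed far from it, so the automaton keeps calculating a fall whose position never changes.

```python
import struct

def f32(x: float) -> float:
    # Round a Python float to the nearest 32-bit float, simulating
    # how an engine might store a world coordinate.
    return struct.unpack("f", struct.pack("f", x))[0]

dy = f32(-0.01)                  # the object falls 1 cm per frame
near = f32(-10.0)                # a coordinate near the world origin
far = f32(-1_000_000.0)          # a coordinate far below the origin

# Near the origin, the step registers normally.
print(f32(near + dy) != near)    # True: the object visibly falls

# Far from the origin, the gap between adjacent float32 values
# (about 0.0625 at this magnitude) swallows the step entirely.
print(f32(far + dy) == far)      # True: falling, yet unmoving
```

The computation of movement and the non-registration of movement happen in the same operation: the object is, in the automaton’s own terms, both falling and not falling.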
What structures might be used to imply this paradox, in which something can be of ostensibly dichotomous states? Negarestani leads us in this direction, although he eventually returns to a normative linguistic framework in which the automaton determines object states through the singly predicative “is” and “was” ([1] pp. 184–199). However, by Negarestani’s own argument, the linguistic structure he uses is not given in the world of the automaton; it depends on the position and language of the observer. The analogical method, therefore, opens up the possibility of creating different semantic and syntactic structures within a given language.
The challenge, it seems, is for the observer to allow the automaton to continue its threat to open or rupture the observer’s conceptual understanding. It is standard game programming practice to fix floating-point imprecision simply by resetting the game world’s origin point, thereby decreasing the distance between the object and the point from which it is calculated and stabilizing the rendering of the object. This might be likened to renormalization practices in quantum field theory, which Karen Barad describes as the “mathematical taming” of infinities to fit already existing conceptual structures [3]. The question, here, is what will happen if these infinities are not tamed, and how does an observer experience and describe this wildness?
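The origin-resetting practice can itself be sketched (a toy Python model under assumed names such as FloatingOrigin and threshold; real engines rebase entire scenes, not a single coordinate): whenever the tracked coordinate drifts past a threshold, the engine folds it into a higher-precision offset and recenters, so the stored value stays small and the gaps between representable values stay fine-grained.

```python
import struct

def f32(x: float) -> float:
    # Round to the nearest 32-bit float, simulating a stored world coordinate.
    return struct.unpack("f", struct.pack("f", x))[0]

class FloatingOrigin:
    """Toy origin-rebasing scheme: the object's stored coordinate is kept
    near zero by periodically folding it into a higher-precision offset."""

    def __init__(self, threshold: float = 10_000.0):
        self.threshold = threshold
        self.origin = 0.0   # absolute position of the current origin (float64)
        self.y = 0.0        # object position relative to the origin (float32)

    def step(self, dy: float) -> None:
        self.y = f32(self.y + dy)
        if abs(self.y) > self.threshold:
            # Recenter: the imprecision is "tamed" before it can grow.
            self.origin += self.y
            self.y = 0.0

    @property
    def absolute(self) -> float:
        return self.origin + self.y
```

With rebasing, tens of thousands of 1 cm steps accumulate to within a fraction of a unit of their exact sum; without it, the coordinate would eventually drift past the magnitude at which a 1 cm step can register at all. The taming works, which is precisely why its refusal is interesting.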
4. Conclusions
Negarestani’s method of reconstructing a machine general intelligence through analogy of the machine’s nonconceptual intuition recognizes the particularity of the machine’s observation of its environment, and also, I argue, the particular position of the observer describing what the automaton “is up to”. These exchanges between the machine and its environment, and the observer’s description of the machine’s interaction within its environment, constitute what I am calling machine performance. In essence, the machine’s actions demand a reordering of linguistic structure and, therefore, of thought, through an observer’s language or description. This reordering might, on one hand, lead to a radical reimagining of and from a new grammatical logic, and, on the other, simply better describe the reality of computation and its contradictions: that the incomputable exists within the computable, already and irrespective of extant linguistic structures. Ultimately, proposing machine performance here as a framework for machine aesthetics raises more questions than answers. One lingering question is who can play the role of the observer, and whether the observer can be a machine. It is possible that automata may one day produce conceptual reasoning, in which case an automaton’s confrontation with another automaton possessing conceptual or nonconceptual intuition may challenge its own linguistic structure. Despite the questions that remain, I hope to have argued for the potential of machine performance as an aesthetic method that can challenge linguistic structures and, therefore, thought and being.