1. Who Invents the Art?
Harold Cohen (1928–2016) began working with algorithmic art in 1968, decades before the art world recognized artificial intelligence as a potential artmaking strategy. During his 50-year collaboration with a computer program he dubbed AARON, Cohen gradually refined its code to produce drawings, and later paintings, of increasing interest to Cohen, and by extension, to a larger public. Over the years Cohen also redefined his own role, which shifted from a programmer who sets rules for AARON to follow to a co-producer who serves as colorist for AARON’s designs. Along with his own experience as an artist, five decades of working with a computational machine to produce art made Cohen especially qualified to judge whether an artificial intelligence can be considered an artist in its own right.
Despite acknowledging the critical role AARON played in his life as an artist, Cohen seemed reluctant to grant his robotic partner the status of an artist, to the extent that he disqualified a number of procedural aspects of artistry that can be modeled in computer code. As psychologist Louise Sundararajan points out, “Cohen is explicit about what creativity is not: It is not simply divergent thinking; nor is it simply algorithms and symbol manipulations.” (Sundararajan 2014) Instead of talking about AARON as an individual artist, Cohen speaks about this collaboration in terms that highlight the personal nature of this human-machine relationship:
Creativity… lay in neither the programer [sic] alone nor in the program alone, but in the dialog between program and programer; a dialog resting upon the special and peculiarly intimate relationship that had grown up between us over the years.
So Cohen locates the artistic potential of machines not in their intrinsic artistry but in their special collaborations with humans, as in his own relationship with AARON. He dislikes the word “creative” and considers the attempt to model human creativity on a computer a vain effort:
AARON will never make a choice to break the rules, nor will it reflect on those constraints as something that it might want to change… AARON has no sense of continuity or sense of experience from one drawing to the next.
Artist Leonel Moura (born 1948), by comparison, is a newer recruit to the machine-human boundary. Moura began experimenting with artificial intelligence and robotic art in the late 1990s and since the 2000s has produced mobile robots that make drawings based on color density. These robots look like they could be AARON’s younger siblings, but their dynamic is collective rather than individual. Moura releases a swarm of them onto a plastic canvas, each armed with ink of a different color and a color sensor that helps its onboard software decide when to mark the PVC surface. The result is a multicolored drawing emerging from hundreds of such individual strokes.
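The collective dynamic Moura describes can be caricatured in a few lines of code. The sketch below is a toy model, not Moura's actual robot firmware: the grid canvas, the step counts, the reinforcement threshold, and the positive-feedback rule (mark where ink already accumulates) are all assumptions introduced for illustration.

```python
import random

SIZE = 60        # hypothetical canvas resolution (cells, not pixels)
STEPS = 20_000   # simulation steps per bot
THRESHOLD = 2    # local ink level that triggers reinforcement

# Shared canvas: each cell stores accumulated ink density.
canvas = [[0] * SIZE for _ in range(SIZE)]

class DrawBot:
    """Toy stand-in for one of Moura's ink-carrying robots."""

    def __init__(self, color):
        self.color = color
        self.x = random.randrange(SIZE)
        self.y = random.randrange(SIZE)

    def step(self):
        # Random walk across the canvas (wrapping at the edges).
        self.x = (self.x + random.choice([-1, 0, 1])) % SIZE
        self.y = (self.y + random.choice([-1, 0, 1])) % SIZE
        # "Color sensor": read the local ink density.
        density = canvas[self.y][self.x]
        # Positive feedback: reinforce existing marks, with a small
        # random chance of seeding a new mark anywhere.
        if density >= THRESHOLD or random.random() < 0.01:
            canvas[self.y][self.x] += 1

# Release a small swarm, each bot armed with a different color.
bots = [DrawBot(c) for c in ("red", "blue", "green", "black")]
for _ in range(STEPS):
    for bot in bots:
        bot.step()

total_ink = sum(map(sum, canvas))
print(f"{total_ink} strokes deposited by {len(bots)} bots")
```

Because each bot reacts only to local density, the drawing that emerges belongs to no single bot; the clusters of marks are a collective product of the swarm, which is the point of Moura's design.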
To explain his preoccupation with artmaking robots, Moura points to the long-accepted tradition of artists who have explored unusual processes in order to divorce aesthetic decisions from their own received knowledge and parochial tastes:
Whether a work of art is made directly by a human artist or is the product of any other type of process is nowadays of no relevance. Recent art history shows many examples of art works based on random procedures, fortuitous explorations, objects trouvés, and arbitrary constructions. Surrealism, for example, even tried to take human consciousness out of the loop. More decisive is whether or not new art form expands the field of art.
For Moura, whether his art is made with or without machines is unimportant, so long as it is accepted by the art world’s gatekeepers. This claim might have been controversial for artists of Cohen’s generation; despite exhibiting in some prestigious venues, Cohen in many ways remained an outsider artist until his death. Moura’s career, however, benefits from the expanded acceptance over the past three decades of a contextual definition of art first suggested by Marcel Duchamp:
The main issue of art is art itself: its history, evolution, and innovative contributions. Anything can be considered art if validated by one of the several art world mechanisms including museums, galleries, specialized media, critics, curators, and/or collectors. Only in this way has the Duchampian ready-made and most of the art produced since been accepted and integrated into the formal art realm.
Moura doesn’t come out and say computer programs can be artists, but his expanded definition of art, which accepts non-human art-making processes, leaves them a place at the table. While Cohen throws a wet blanket on the possibility of art born of an autonomous machine, Moura makes room for robots to act with aesthetic free will, claiming “I teach the robots how to paint, but afterward, it is not my doing.” Moura sees this as a positive, and perhaps inevitable, trend, asserting that “the autonomy of machines is essential to the best interests of humanity.” (Moura 2018)
If the shift from Cohen’s to Moura’s viewpoint can be extrapolated to the longer term, it would seem the time will come when machines could make art independent of human agency. Composer Pierre Barreau is already feeding 30,000 music scores into a neural network to generate what he judges to be a neural network’s “original compositions.” Barreau looks forward to a time when machines, not people, custom-tailor scores for individual listeners: “What we’re working on is to make sure that AI can compose hundreds of hours of personalized music for those use cases where human creativity doesn’t scale.” (Barreau 2018)
2. Who Invents the Artist?
Their generational and philosophical differences aside, Cohen’s and Moura’s descriptions of their processes offer contrasting perspectives on the question of authorship in computational art: is the author the human who writes the algorithms (Cohen), or an autonomous robot that has merely been educated by a human (Moura)? Yet another, and in some ways more important, perspective on a work’s creator comes from its viewer.
A thought experiment will explain why. Suppose proof came to light that the drawings and paintings Cohen had attributed to AARON were his sole creations all along, and that his videos of AARON in action were fictions designed to fool unsuspecting art critics. Such a revelation would not change who the true artist was, because that had been Cohen from the start; but it would have a huge impact on the reception of his work by lay and professional viewers. In a single stroke, we would look at a blue triangle no longer as the result of an inescapable algorithm but as the subjective paint stroke of a human artist. Some might switch from a judgment such as “not bad for a machine” to one like “my four-year-old could do that.” Others might turn from seeing the work as cold and sterile to imagining it as the product of heartfelt emotion. In this sense, the “artist” as it matters for art history is not the actual person but the construct that lives in the minds of a work’s beholders.
This is the argument of Michel Foucault’s influential 1969 essay, “What Is an Author?” For Foucault, and many of the post-structuralist thinkers who followed him, an artist is not so much a body that lived in a particular time and place as it is a conjecture that viewers attach to an artwork to help them make sense of it. Foucault wrote his essay in response to Roland Barthes’ 1967 provocation “The Death of the Author,” which denied any necessary relationship between a text and its creator. Foucault conceded to Barthes that interpreting a text might not require knowing the biography and intent of the actual author, but he insisted that readers trying to interpret the text would infer the qualities of a hypothetical author nevertheless. To give a crude example, readers of Ernest Hemingway don’t need to know he was an adventurous war hero, because his writing is already taut and muscular; readers of Marcel Proust don’t need to know he was a sickly shut-in, because his writing is already introspective and full of reverie.
3. The Meta-Artist Function
From this perspective, our central question becomes not “can machines be artists,” but whether a machine can give rise to an “artist function” in Foucault’s sense. Instead of the artist function disappearing when a machine makes art, it may be split in two. We might ascribe one artist function to the non-human machine–perhaps a limited one based on our understanding of the machine’s finite capacity for reasoning. As of this writing, however, algorithms are still programmed by humans, and even neural networks are trained on data selected by humans. So we might ascribe a second, meta-artist function to the machine’s human handler–what legal scholar Annemarie Bridy calls “the author of the author of the works.” (Bridy 2011)
How is this different from simply giving both human and machine collaborators credit for the creative act, as Cohen suggests? Remember that the artist function differs from the artist because it is produced by the viewers. So yes, we might learn that Harold Cohen went to art school or that fellow algorithmic artist Jack Ox studied musicology. But we might also look at AARON’s bright shards of blue, green, and orange, and imagine that AARON’s creator had seen Matisse’s The Dancers. Or we might look at the fractured abstractions produced by Jack Ox’s algorithms and infer that the woman who wrote them loves rhythmic composers like Stravinsky.
In other words, the nature of the artist and meta-artist functions and their relationship may vary depending on the analogies viewers are likely to draw from their own experience. Some might see the machine as a tool, in which case they will construct a meta-artist who uses the tool in a more-or-less virtuosic or telltale fashion (as they might when examining brushstrokes in an Abstract Expressionist painting). Other viewers might imagine the machine as a child–in which case, they will construct a meta-artist responsible for educating (setting rules for) the child, but not blame the meta-artist directly for individual mistakes (and thus ally themselves with Moura’s perspective). Still others might imagine the machine as a full-fledged collaborator, in which case they will picture a human artist influencing but also being influenced by the machine’s choices (thus aligned with Cohen’s perspective). In every case, regardless of these differences, viewers are likely to ascribe traces of “personality” in the image to the meta-artist rather than the machine.1
4. When Artists and Artist Functions Diverge
As historical motivation for the birth of the author function, Foucault notes that legal regimes bent on repressing writers of transgressive texts had to conjure someone to blame. During the Reformation, for example, broadsides against the Catholic church mass-printed by Protestants on Gutenberg’s press did not betray the handwriting of their creators; nonetheless, readers could reconstruct the figure of the author from the tone, diction, and substance of each manifesto.
Despite the legal incentive to identify individual writers, Foucault warns us that these reconstructions are fictions, highly contingent on the nature of the text and our own station:
Nevertheless, these aspects of an individual, which we designate as an author (or which comprise an individual as an author), are projections, in terms always more or less psychological, of our way of handling texts: in the comparisons we make, the traits we extract as pertinent, the continuities we assign, or the exclusions we practice. In addition, all these operations vary according to the period and the form of discourse concerned. A ‘philosopher’ and a ‘poet’ are not constructed in the same manner; and the author of an eighteenth-century novel was formed differently from the modern novelist.
Foucault presumably had human creators in mind for his author function, yet we can apply his conceit equally to nonhuman agents. And as we shall see when examining hypothetical and real cases of nonhuman agents, the “projected author” may have a bearing that is both aesthetic and legal–reminding us that Foucault identified the law as a potent motivation for constructing an author function when authorship is inherently ambiguous.
A case of such ambiguous authorship occurred in 2018, after a young artist who had just graduated from high school, Robbie Barrat, posted a machine learning program he wrote to the code-sharing site GitHub. Barrat programmed this software to learn visual rules from a sample of existing, human-made paintings and to generate new, digital paintings in the same style. Having downloaded Barrat’s software, the French artist collective Obvious used the code to generate a new painting in the form of a physical print on canvas, which subsequently sold at auction for $432,500. Barrat later asked on Twitter, “Am I crazy for thinking that they really just used my network and are selling the results?”2
Barrat’s question suggests we should ascribe authorship to the machine learning software itself, of which he is the presumptive owner. In Barrat’s case, the copyright issues are moot because he distributed his code as open-source software. However, consider a hypothetical case where authorship had more legal impact. Imagine that an AI researcher (“Harold Junior”) rents time on a supercomputer owned by IBM (“Deeper Blue”). Harold Junior trains Deeper Blue in the art of digital painting, after which his silicon apprentice goes on to produce images that its human teacher could not have foreseen on his own. The complexity and aesthetic interest of Deeper Blue’s evolving oeuvre attract the eye of the fashion industry, and Nike inks a seven-figure deal with IBM to display these images on t-shirts and sneakers. Not so fast, says Harold Junior, who sues them both for copyright infringement, claiming he must be the artist, since Deeper Blue is only a software agent.
As hypothetical as this example may seem, a parallel case of nonhuman artistry was debated not in a philosophy symposium but in a court of law. In a national park in Indonesia in 2011, nature photographer David Slater set up a free-standing camera in such a way that a troop of macaque monkeys could trigger the shutter and take photos of themselves. An editor at the Wikimedia Foundation later posted these “monkey selfies” on the Wikimedia website, sharing them freely with the rationale that the works had no authors and were thus in the public domain. The photographer protested, building his claim to the copyright on the allegation that he had created the conditions for the “monkey selfies” to happen by setting up his equipment in such a way as to optimize the chances for the primates to capture their faces. The next year, People for the Ethical Treatment of Animals (PETA) filed a lawsuit against Slater in the name of one of the monkeys, which they called Naruto, claiming that the non-human animal had a copyright over the picture it had taken of itself.
Legal opinions and precedents for Naruto et al. versus David Slater were equivocal, as we might imagine would also be true of the hypothetical case Harold Junior versus IBM. Perhaps the clearest precedent in favor of Slater is the UK law covering computer-generated and other non-human creations, which assigns copyright to the person who sets the rules. Nevertheless, even this standard admits: “However, as robotic software and hardware becomes more ‘cognitive’ and learns and adapts from data inputs, the works created may have no relationship to the original author’s software and so other factors may well come into play.” (Holder et al. 2016)
Regardless of legal rectitude, from an aesthetic standpoint the arguments on both sides put too much stress on whether the monkey intended to take a photo. From Foucault’s perspective, what determines the artist function in this case is the reaction viewers have to the particular photos in question. The photos show a seemingly self-possessed macaque, in one case reaching out its hand to trigger the camera, in another grinning widely as it stares into the lens. To many viewers, these images suggested a deliberate self-portrait. If, by contrast, the snapshot triggered by Slater’s apparatus had shown a blurry paw rather than an expressive headshot, the press would have been hard-pressed to describe the macaque as “fascinated by her reflection in the lens.” (Daily Mail Reporter 2011) To take a more extreme variation, suppose a photo of a stand of trees had resulted from a branch falling on the shutter trigger after a windstorm; few would have jumped to the conclusion that one of those trees was taking a selfie, because the photo wouldn’t betray evidence of its author. Unlike these hypothetical counterexamples, the actual photos precipitated a believable author function. Without access to the macaque’s intentions, we have no way of proving this construct true or false, which is why our imagination is prone to fill in the gap in our knowledge. The same is true of viewers beholding works made by artificial agents.
Yet in both the case of the monkey selfie and that of art produced by artificial intelligence, our imagined authors need not be limited to the being that tripped the shutter or sprayed the ink on PVC. We can also back up to the meta level and ask ourselves what Slater was trying to construct by positioning his gear in the way he did. Regardless of Slater’s actual intentions, his “algorithm”—a tripod-mounted camera with a large wide-angle lens, predictive autofocus, and a shutter release within reach—as well as the fact that he selected the most “selfie”-like images among the many taken by the monkeys—suggests that he was carefully crafting a revealing way to depict the primates he was studying. From this perspective, Slater was the meta-artist of those selfies—a distinction that may clarify how authorship bifurcates in the era of intelligent machines.
5. Can You Kill the Meta-Artist?
The argument made in Naruto et al. versus David Slater sets another obstacle in the path toward the truly autonomous artmaking machines imagined by Moura and Barreau. Apart from eliminating the human collaborator from the equation, such autonomy would also require expunging any residual meta-artist function constructed by the viewer. Viewers have been conditioned for thousands of years to apply empathy and interpretation in imagining motivations for the creators of works they view, and they are likely to ascribe personalities to the machinic producers of such works as well. Yet humans also know computers are programmed by other people; media scholar Helen Hester argues that listeners put more trust in the pronouncements of a feminine digital assistant like Siri than they would in an actual woman because they “imagine a male programmer” is ultimately responsible for Siri’s words.3
Such empathic impulses may inevitably spur the viewer to construct an artistic persona for the absent creator of an art-making machine, but that hasn’t stopped some AI innovators from trying to expunge any meta-artist function from the equation. Moura seems to reject the idea that a viewer would credit him for one of his ArtBots’ drawings just because he “taught” them rules for making paintings. He offers the analogy that “No one will claim that a given novel is the product of the author’s school teacher.” (Moura 2018) This analogy may hold at some point in the future, when artificial intelligences are accorded the same autonomy as human children, but given the law’s focus on human versus non-human actors, they are more likely to be accorded the autonomous status of animals. And it is central to our construct of animal personalities that they are highly influenced by their stewards; we are quick to blame a disobedient dog on its careless trainer.
If animals represent one possible analog for AI trained by human handlers, technology scholar Sherry Turkle has researched the embodiment of that analogy in the form of robotic pets such as Aibo, Furby, and PARO. Turkle’s research into purring seals and dancing dogs reminds us that robots don’t have to sit still like texts or finished drawings; in art contexts as in toy stores, they are often seen lighting up or spinning around in reaction to viewers or each other. Turkle thinks such interactions with machines raise again the question Darwin posed to his generation about whether humans are unique: “How will interacting with relational artifacts affect people’s way of thinking about what, if anything, makes people special?” For his part, Moura creates opportunities for viewers to watch robots making drawings, and thus gain even more insight into the artistry of both the machines and their makers:
“From the viewer’s perspective, the main difference from the usual artistic practice is that he/she witnesses the process of making it, following the shift from one chaotic attractor to another. Even though finalized paintings are kept as the memory of an exhilarating event, the true aesthetical experience focus [sic] on the dynamics of picture construction.”
Turkle calls these new technologies “relational artifacts,” yet she stops short of ascribing them the complexity of human relationships.4
While admitting that the children who play with robots form emotional attachments, she notes the same children understand their metallic companions are capable only of “a robot kind of love.” For all the empathy they evoke, Turkle clearly sees computational creatures on a different level from humans: “To say all of this about our love of our robots does not diminish their interest or importance. It only puts them in their place.” (Turkle 2006, p. 10) Turkle shows that even children recognize that robots aren’t fully autonomous creatures. Were Turkle’s research to be applied to the aesthetic sphere, it might be concluded that our limited ability to empathize with artificial intelligence restricts it to “a robot kind of art.”
Perhaps no artificial agent researcher strives harder to remove the meta-artist from the equation than zoologist and artificial life pioneer Tom Ray, whose project Tierra takes a more radical approach to eliminating human influence in the act of computational creation. Tierra is a self-evolving community of computer viruses with no fitness function apart from survival. To create this artificial ecosystem, Ray filled an environment on his hard drive with snippets of program code designed to copy themselves with every cycle of his computer. He further programmed this environment to introduce random mutations in the code of these tiny programs. By competing for disk space, the resulting virtual creatures evolve according to the same logic of self-replication and mutation that drives biological evolution, generating new “species” that Ray never could have predicted. These include classes of programs that, like biological parasites, evolve to live inside host programs, some of which later evolve to be immune to such parasites. Ray himself claims that his creatures are alive by the very fact that they reproduce and evolve.
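The logic Ray describes can be sketched in a few lines. The toy version below is an illustration, not Tierra itself: a string stands in for a program’s machine code, and the soup size, mutation rate, and ancestral “genome” are all invented for the example. It shows only the replicate-mutate-reap loop, not Tierra’s virtual CPU or its parasites.

```python
import random

random.seed(42)              # fixed seed so the toy run is repeatable
SOUP_CAPACITY = 200          # the fixed "disk space" replicators compete for
GENOME = "copyme"            # hypothetical ancestral program
MUTATION_RATE = 0.02         # per-character chance of a copying error
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def replicate(genome):
    """Copy a genome, flipping each character with small probability."""
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else ch
        for ch in genome
    )

soup = [GENOME]
for generation in range(30):
    # Every program copies itself once per cycle.
    soup.extend(replicate(g) for g in list(soup))
    # The "reaper": when the soup overflows, the oldest programs die first.
    if len(soup) > SOUP_CAPACITY:
        soup = soup[-SOUP_CAPACITY:]

species = set(soup)
print(f"{len(soup)} programs, {len(species)} distinct genotypes")
```

Even in this caricature, the population quickly fills its fixed capacity and mutant genotypes accumulate that the programmer never wrote, which is the sense in which Ray can claim to have created only the potential for his creatures, not the creatures themselves.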
Yet as neutral and personality-less as Tierra seems, we may still wonder about the intentions of its human architect. Asked if he felt guilty turning off his machine and erasing all his digital progeny, Ray replied that he could always recreate that exact set of creatures just by resetting his Tierra program to the same initial conditions. Ray thus argued that he had created the organisms only in the sense of creating the potential for them to exist. But this clever abjuration of responsibility only offers the viewer more material from which to imagine the mastermind behind Tierra’s synthetic organisms. As in any work of fiction, there will be more than one interpretation: some who hear Ray’s response may imagine him a heartless psychopath, while others will judge him a principled man of science.
More important than the judgment itself is the fact that the question came up in the first place. Whether Tierra’s critters are alive or inanimate, some viewers want to know the motivation behind the man who created them.5
Even for a computational system as seemingly neutral as artificial computer viruses evolving on a hard drive, the astute viewer may be able to reconstruct the cultural assumptions of its human meta-artist. For example, anthropologist Stefan Helmreich, who analyzed the artificial life scientific movement of the late 1980s and 1990s, argues that works such as Tierra are built upon a dualistic and computationalist vision of the living enabled by a “symbolically masculine collapse of life into instrumentally useful information process.” For all their seeming autonomy from their creators, Helmreich finds that such artificial ecosystems embody the historical and cultural mores of the contexts in which they were created, such as the Santa Fe Institute. These artificial life meta-artists were predominantly upper middle-class, Judeo-Christian white men influenced by New Age, sci-fi, and Western creation stories that did not feature pregnant females. (Helmreich 2000, p. 216)
Art is not a measurable fact, like the temperature of bathwater; it is an interpreted condition, like whether bathwater feels warm or cold. As long as humans continue to infer motivations behind creative acts, artists will exist as social constructs. So the question we started with—can machines be artists?—is the wrong question. We should instead be asking: what roles does machine-made art leave for artists—imagined or real, flesh or silicon—and for the viewers who imagine them?