Open Access Review

AI to Bypass Creativity. Will Robots Replace Journalists? (The Answer Is “Yes”)

York & Ryerson Joint Graduate Program in Communication & Culture, York University, Toronto, ON M3J 1P3, Canada
Information 2018, 9(7), 183;
Received: 1 July 2018 / Revised: 17 July 2018 / Accepted: 21 July 2018 / Published: 23 July 2018
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)


This paper explores a practical application of a weak, or narrow, artificial intelligence (AI) in the news media. Journalism is a creative human practice. This, according to widespread opinion, makes it harder for robots to replicate. However, writing algorithms are already widely used in the news media to produce articles and thereby replace human journalists. In 2016, Wordsmith, one of the two most powerful news-writing algorithms, wrote and published 1.5 billion news stories. This number is comparable to, and may even exceed, the number of stories written and published by human journalists. Robo-journalists’ skills and competencies are constantly growing. Research has shown that readers sometimes cannot differentiate between news written by robots and news written by humans; more importantly, readers often make little of such distinctions. Considering this, these forms of AI can be seen as having already passed a kind of Turing test as applied to journalism. The paper provides a review of the current state of robo-journalism; analyses popular arguments about “robots’ incapability” to prevail over humans in creative practices; and offers a forecast of the possible further development of robo-journalism and its collision with organic forms of journalism.
Keywords: artificial intelligence (AI); automated journalism; robo-journalism; writing algorithms; future of news; media ecology; Turing test

1. Introduction

Artificial intelligence (AI) is usually defined in two ways that, in a sense, are contradictory. On the one hand, artificial intelligence is intelligence that mimics human intelligence and/or behavior. On the other hand, artificial intelligence is intelligence that is opposite to natural, i.e., human, intelligence. As Russell and Norvig describe it, the first type of definition measures the success of AI “in terms of fidelity to human performance”, while the second type measures the success of AI “against an ideal performance measure, called rationality” [1] (p. 1).
The first approach is called “a human-centered approach”, within which researchers assess if AI is acting humanly or thinking humanly. The second approach is called “a rationalist approach”, within which researchers assess if AI is acting or thinking rationally.
Russell and Norvig’s paradigm can be interpreted through a set of relations between AI and humans. AI either simulates human nature or opposes it.
In the first case, AI simulates humans (either their acting or their thinking) until reaching a level of complete likeness. This is the approach for which the Turing test is applicable.
In the second case, rational AI opposes the irrationality of humans and “does the ‘right thing’, given what it knows” [1] (p. 1). This is the approach to which many sci-fi scenarios involving the rebellion of machines refer, starting with Asimov’s “I, Robot” and including Hollywood’s “Terminator” franchise.
So, AI either performs as a human and exceeds humans as a human simulation or performs as a “smarter” entity and exceeds humans as a being of the next evolutionary level.
Noticeably, both the “simulating” and “opposing” approaches imply such scenarios in which AI substitutes and then replaces humans, either by mimicking or by outdoing them, as inevitable.
The idea of the substitution of humans by artificial intelligence is the ultimate completion of McLuhan’s idea of media as “extensions of man” [2]. Media have extended and enhanced different human faculties over the course of human evolution and civilization’s development. Within such an approach, the advent of artificial intelligence can be seen as an inevitable outcome of the evolution of media. Vice versa, the evolution of media inevitably results in the advent of artificial intelligence (at least at the human stage of evolution, if considered within Teilhard de Chardin’s paradigm of mega-evolution) [3].
To that end, artificial intelligence is the final point of any sufficiently long and conscientious media study. A thrilling fact is that scholars and experts are already discussing this final point in practical contexts.
While the concept of artificial intelligence is quite profoundly developed in sci-fi literature and popular movies, the real development of artificial intelligence is very practical, not that dramatic, and therefore often invisible. A popular view of AI’s development is that this development is occurring out of curiosity. Curiosity may play a role, but in reality there are some practical industrial and market demands for AI.
There are industries that are interested mostly in the computational capacity of AI, such as air traffic control, social media algorithms, or virtual assistants like Siri or Alexa. Their task is to calculate and cross-analyze big data, with some predictive outcomes. Such systems can be characterized as “very narrow” AI; they are obviously just helpers to humans.
However, there are at least three industries that seek, for very practical reasons, not just to develop better AI, but to completely substitute humans with AI. They are:
The military. Smart war machines are expected to make human-style decisions immediately on the battlefield, which increases the efficiency of their performance while reducing human casualties [4].
The sex industry. Smart sex dolls are expected to completely substitute sex partners and then probably even life partners for humans by simulating human sex and communication behavior [5]. They will then very likely offer “super-human” sex and communication experiences, as any new medium first performs old functions and then creates its own environment.
The media. News-writing algorithms aim to eventually replace human journalists, no matter how this innovation is currently thought of. Even if someone sees robo-journalists just as helpers, a kind of intern in the newsrooms, the ultimate completion of the idea of news-writing algorithms is for them to write the news instead of humans and in a way no worse than humans (in fact, much better, faster, cheaper, and with higher productivity).
Some other areas or industries can also be listed here. The idea is that these areas represent the approach that suggests AI must replace humans: not just selected human functions, but the very physical presence of humans.
However, a reservation here needs to be made regarding these three industries. The military wants to replace humans, but they do not want smart war machines to be indistinguishable from humans. The application of AI in military actions does not require human likeness; military AI is non-human in its appearance and performance. It replaces people through their distant and enhanced representation or multiplication (as in the case of drone swarms; a drone swarm is quite an interesting new implementation of McLuhan’s idea about media as extensions of man).
AIs for the sex industry and for the media are different. They fully implement the very spirit of the Turing test: AI must be indistinguishable from humans by humans. This is important both for the sex industry and for the media.
Strange as it may seem, the transition from a narrow AI to a general AI may happen in the sex industry and the media.
The sex industry’s demand for AI is well supported by market forces [6] and cultural changes [7]. However, the application of AI in the sex industry is complicated, as it requires physical embodiment and extremely complex behavioral simulation. So, sex-based AI will most likely arrive later than media-based AI, which does not require any physical embodiment and “personal” behavioral peculiarities (so far).
Thus, media scholars are in a position of particular responsibility regarding AI studies. First, as is stated above, any media study that is conducted far enough and fair enough leads to AI. Second, the prototype of AI itself (at least “simulating” AI) will relate to media (including social media).
In fact, media-based narrow AI is already here; it is only the inertia of perception, together with the defense mechanisms of people both in the industry and in the audience, that prevents the public from admitting that media-based narrow AI is already among us. Many have even interacted with it, most likely without knowing it.
The paper presents the current state of the art in robo-journalism in the following way.
First, several main areas of the narrow AI’s application in journalism are reviewed:
data mining;
commentary moderation;
topic selection;
news writing.
Second, several cases of real-life journalistic Turing tests, though naively conducted, are described. On the basis of these cases, two main journalistic functions are reviewed regarding their execution by robo-journalists:
the ability to process information;
the ability to express information (writing in style).
Third, the paper considers two main popular counterarguments about robots’ “incapability”:
the inability to create;
the inability to understand beauty (to write in style).
The review shows that, for every argument on what robots cannot do, there is a more convincing argument for what robots can do instead (and humans cannot).
Finally, in the section “Roadmap for AI in the media—and beyond” the paper offers some speculations on possible ways for the transition of narrow media-based AI into general AI.
The paper reviews a wide range of academic and popular publications on automated journalism. As the subject is presented only through its manifestation in the marketplace, the most important evidence on the current state of the art in robo-journalism can be drawn from expert observations and from reflections made by people in the media industry. The paper is organized as a systematized review of such observations and reflections, followed by their analysis and some futurological speculations built upon this analysis.
The main idea of the review is that robo-journalism:
outdoes the organic form of journalism in all characteristics regarding data processing;
can compete with humans in the part of the job that relates to writing and style.
The most important revelation of the paper regarding the style of robo-journalism is that robots do not have to write better than humans; they only have to write well enough (in order to be indistinguishable and to be hired).
In fact, there is no need for robots to prove they can do journalism better than humans. Such a view is becoming outdated. On the contrary, we are about to enter a market in which human journalists will be required to prove their capacity to perform no worse than robots. The paper aims to show why and how this will happen.

2. The Media: We Are Hiring

In regard to the functions executed by algorithms, their use in the news industry can be roughly divided into several areas:
Data mining;
Topic selection;
Commentary moderation;
Text writing.

2.1. Data Mining

The search for required data and the processing of big data is the most obvious application of algorithms in journalism. An increasing amount of data of all kinds is becoming accessible. The usefulness of withdrawing relevant data from databases is self-evident—it helps journalists in finding correlations and sometimes causations that would not have been found otherwise.
However, even more sophisticated and journalistically specific cases of data mining are already known. For example, the New York Times’s Interactive News team created an application that can recognize the faces of members of Congress by photo. Initially, the application helped reporters to check “which member you’ve just spoken with”. The idea implies that the algorithm compensates for human laziness or incapability (as robots often do). Nevertheless, people on the Times’s Interactive News team insist that the issue is worthy of special attention, as there are 535 members of Congress, and they are always in rotation. Thus, the application aims to strengthen reporters’ confidence and help with fact checking.
As often happens with a new media technology, which at first performs the functions of an older medium, it soon unleashes its own superpower and changes the way people use it. Once able to match a photo of a speaker to a database, the congressperson facial recognition application was soon performing a detective’s task. After matching a set of pictures pulled from social media, the application helped a reporter detect some congresspersons’ participation in an important event. The reporter got a seminal prompt that they could not have gotten otherwise [8].
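The matching step behind such an application can be pictured as a nearest-neighbor lookup over face embeddings. The sketch below is purely illustrative (the gallery names and vectors are invented, and the Times’s actual system is not public); it assumes faces have already been converted into numeric embedding vectors by some upstream model:

```python
import math

def nearest(query, gallery):
    """Return the name whose stored embedding is closest to the query vector."""
    return min(gallery, key=lambda name: math.dist(query, gallery[name]))

# Invented embeddings standing in for the output of a real face-embedding model.
gallery = {
    "Member A": [0.10, 0.90, 0.30],
    "Member B": [0.80, 0.20, 0.50],
}

print(nearest([0.75, 0.25, 0.50], gallery))
```

The same lookup works whether the query photo comes from a reporter’s phone or, as in the detective case above, from images gathered on social media.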
The incredible growth of available data has shifted the focus from data collecting and processing to data representation. There is so much data around that even though it is processed and reduced it remains hard to digest. As stated in Marianne Bouchart’s “A data journalist’s microguide to environmental data”, “Things are still quite complicated because we have more data available than before but it is often difficult to interpret and to use with journalistic tools” [9].
The revelations of data journalism often exceed people’s ability to perceive them in textual format, so data-driven journalism (or data journalism) is developing alongside data visualization, and they have been shaping new genres and new sections in the media. Newsrooms have moved much farther along than primary data search and data processing. Algorithms can produce not just crude analytics for human journalists to consider but ready-to-publish textual and non-textual news products of a very specific nature.
The following story is already a milestone in the history of robo-journalism. On 17 March 2014, at 6:25 a.m., journalist and programmer of the Los Angeles Times, Ken Schwencke, was jolted awake by an earthquake. He rushed to his laptop, where he found an e-mail notification sent to him by an algorithm named Quakebot:
L.A. Now: Ready for copyedit: Earthquake: 4.7 quake strikes near Westwood, California
This is a robopost from your friendly earthquake robot. Please copyedit & publish the story. You can find the story at: […]
If the city referenced in the headline is relatively unknown, but the earthquake occurred close to another, larger city, please alter the headline and body text to put that information first.
I am currently not smart enough to make these decisions on my own, and rely on the help of intelligent humans such as yourselves.
Thanks! Quakebot [10].
Schwencke loaded up the Los Angeles Times’ content management system and found Quakebot’s ready-to-publish report:
A shallow magnitude 4.7 earthquake was reported Monday morning five miles from Westwood, California, according to the U.S. Geological Survey. The temblor occurred at 6:25 a.m. Pacific time at a depth of 5.0 miles.
According to the USGS, the epicenter was six miles from Beverly Hills, California, seven miles from Universal City, California, seven miles from Santa Monica, California and 348 miles from Sacramento, California. In the past 10 days, there have been no earthquakes magnitude 3.0 and greater centered nearby.
This information comes from the USGS Earthquake Notification Service and this post was created by an algorithm written by the author.
Read more about Southern California earthquakes [11].
Schwencke thought it looked good, set it live and sent out a tweet [12].
By that time, Quakebot, which was written by Schwencke himself, had been around for two years. Quakebot pulled data (place, time, magnitude of earthquakes) from the U.S. Geological Survey’s Earthquake Notification Service. Then, the robot compared these data to previous earthquakes in this area and defined “the historic significance” of the event. The data were then inserted into suitable sentence patterns, and the news report was ready. The robot uploaded it into the content management system, and sent a note to the editor.
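The pipeline just described amounts to filling sentence templates from structured feed data. The following is a minimal sketch under that assumption; the field names and the “significance” heuristic are invented for illustration, and the real Quakebot pulls live USGS notifications rather than a hard-coded dictionary:

```python
# Illustrative sketch of a Quakebot-style template filler.
# All field names are hypothetical; real input comes from the USGS feed.

def quake_report(event, recent_quakes):
    """Fill a sentence template from structured earthquake data."""
    # Crude stand-in for "historic significance": rank against recent quakes.
    stronger = sum(1 for q in recent_quakes if q["mag"] >= event["mag"])
    context = ("In the past 10 days, there have been no earthquakes "
               "magnitude 3.0 and greater centered nearby."
               if stronger == 0 else
               f"It is one of {stronger + 1} notable quakes in the area recently.")
    return (f"A shallow magnitude {event['mag']} earthquake was reported "
            f"{event['day']} {event['period']} {event['dist_mi']} miles from "
            f"{event['place']}, according to the U.S. Geological Survey. "
            f"{context}")

event = {"mag": 4.7, "day": "Monday", "period": "morning",
         "dist_mi": 5, "place": "Westwood, California"}
print(quake_report(event, recent_quakes=[]))
```

Data in, templated sentence out: no part of this requires understanding, only well-structured input and carefully written patterns.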
Thus, the LAT became the first media outlet to report on the earthquake—eight minutes after the tremor struck and much earlier than any human journalists managed to report [13]. The intern robo-journalist had outrun its bio-colleagues.
This earthquake report was far from being worthy of a Pulitzer. However, it allowed an editor to publish a news story minutes after the event happened. Needless to say, earthquakes are hot news items in LA, and a fast response time adds significant value to reporting on them.
Another example of data-driven journalism also relates to the Los Angeles Times. The paper’s section of criminal chronicles called “The Homicide Report” [13] has been run by an algorithm since 2007.
As soon as a coroner adds information on a violent death into the database, the robot draws all available data, places it on the map, categorizes it by race, gender, cause of death, police involvement, etc., and publishes a report online. Then, if deemed necessary, a human journalist can gather more info and write an extended criminal news story.
Robot participation has significantly changed traditional criminal coverage. In the past, a journalist would cover only newsworthy crimes, which were crimes with the highest resonance potential. Now, the robot covers absolutely all incidents involving death. The robot also allows data to be observed by categories of race, gender, and neighborhood, and visualized on a scalable and “timeable” county map, which creates secondary content of high importance that otherwise would have been missed in human reporting. The map of crimes composed by the robots has additional value, for instance, for the real estate market. Thus, algorithms add value to news by extracting insights from data analysis that are often unnoticeable to human reporters.
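The categorization step behind such coverage can be sketched as simple aggregation over structured records. The record fields below are hypothetical; the real Homicide Report database is considerably richer:

```python
from collections import Counter

# Hypothetical coroner records; the real database holds many more fields.
records = [
    {"neighborhood": "Westlake", "cause": "gunshot", "gender": "male"},
    {"neighborhood": "Westlake", "cause": "stabbing", "gender": "female"},
    {"neighborhood": "Van Nuys", "cause": "gunshot", "gender": "male"},
]

def tally(records, field):
    """Count incidents per category, ready for a map layer or a report."""
    return Counter(r[field] for r in records)

print(tally(records, "neighborhood"))
print(tally(records, "cause"))
```

Each tally feeds one facet of the published view: the per-neighborhood counts drive the map, the per-cause and per-gender counts drive the category filters.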
It is worth adding that the Los Angeles Times’ criminal reporting robot covers a territory with a population of 10 million. This is comparable to the population of Sweden or Portugal. Certainly, a bio-journalist is not capable of performing instant statistical calculations on such a scale, let alone immediately placing them into synchronic and diachronic contexts within a format that is readily accessible and easily comprehensible for readers.
Data mining or, more generally, data journalism is particularly in demand for coverage of areas with significant amounts of data—finances, weather, crime, and sports. As has already been stated, data mining can detect highly valuable content that simply cannot be traced by human reporters.
In addition, the internet of things is opening new horizons for data journalism. For example, digitalization of sports creates a completely new type of sports reporting. As stated by Steven Levy back in 2012 in his article for Wired [14], sports leagues have covered every inch of the field and each player with cameras and tags. Computers gather all possible data that one can imagine, such as ball speed, altitude, throwing distance, and the positions of players or even their hands—all made possible through telemetry. A well-trained robot can spot that a pitcher threw his last fastball a little weaker or that a batter leaned left before hitting a winning run. Is this information important? It is, but a human reporter would not notice it. Old-style sports journalism cannot do it, just like the old form of criminal reporting cannot produce an interactive map of the density of murders in an area.

2.2. Topic Selection

The principles of big data analysis and correlation analysis allow algorithms to make quick and precise decisions about what is newsworthy right now by assessing the interests of the audience. These interests manifest themselves in measurable ways: likes, shares, reposts, time spent, etc.
The amount of data that allow measuring human reactions to content will increase, particularly after biometric measurements of human reactions come into play. Eye tracking technologies are already used to understand the nuances of human attention. Algorithms are potentially able to decide what is of interest to humans, with any possible correlation between social-demographic categories and topics of interest.
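A minimal sketch of such engagement-based ranking might look as follows; the metrics, weights, and topics are invented for illustration, and real newsroom models are far more elaborate:

```python
# Toy engagement-scoring ranker. The weights are invented for illustration.

def engagement_score(metrics, weights):
    """Weighted sum of whatever engagement signals are being tracked."""
    return sum(weights[k] * metrics.get(k, 0) for k in weights)

WEIGHTS = {"shares": 5.0, "likes": 1.0, "comments": 3.0, "read_seconds": 0.01}

topics = {
    "city budget": {"shares": 40, "likes": 900,
                    "comments": 120, "read_seconds": 52000},
    "local sports": {"shares": 210, "likes": 1500,
                     "comments": 60, "read_seconds": 30000},
}

ranked = sorted(topics, key=lambda t: engagement_score(topics[t], WEIGHTS),
                reverse=True)
print(ranked)
```

The substance of a real system lies not in this arithmetic but in choosing the weights, which is exactly where the predictive modeling described below comes in.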
Robots in newsrooms have already started doing this. Canada’s Globe and Mail uses an algorithm that traces readers’ preferences. Editors still decide which story to develop, but a robot suggests topics “that already have a proven track record with readers online”. As publisher Phillip Crawley described in an interview with Canadian Press, “Instinct of an experienced editor … can’t ever be substituted, but when you’ve got data which constantly feeds and gives you great clarity, there will be great surprises” [15].
The algorithm is able to complete more sophisticated tasks than just tracking what readers like, read, post, discuss (how many of them, for how long, etc.). Having enough data, it is only logical to take a step toward summarizing and analyzing readers’ motives and desires. The next step will be assignment planning. It has been reported that the Globe and Mail “also recently hired a technology expert with a PhD in artificial intelligence to design a ‘predictive modeling’ platform to help determine which stories will interest readers and drive engagement, such as sharing on social media” [15].
This case shows that robots can potentially substitute not just reporters but also editors. At this point, again, algorithms cannot make final decisions about what is interesting for the audience. Also problematic is the idea of thematic planning based on readers’ former preferences. However, a sufficient amount of data about readers’ behavior along with a strong enough predictive model and an instant and constant connection to all relevant sources of potential news can make such editor’s associates very potent and resourceful.
At the end of the day, the editor guesses, but the robot knows. When newsrooms have less data about readers, the editor with their guesswork has all the advantages. With more data about readers, the robot can take over the job.
And yet another consideration is worth mentioning. The analysis of readers’ (human) behavior is at only the beginning of its long and potentially infinite path. Both the quantity of data about the audience and the quality of the processing and cross-processing of these data will accelerate and grow endlessly; whereas, in contrast, the human potential for doing the editor’s job has already fully revealed itself. We are finishing when they are starting.

2.3. Commentary Moderation

Another area for algorithms’ application in the media is fostering healthy conversation online, or commentary moderation. Audience engagement is an important asset in the media business. However, free access to commenting often provokes trolls and spammers. That is why, after falling in love with user-generated content at the end of the 2000s, many newsrooms shut down comment sections in the mid-2010s. Moderation consumed too many resources to maintain.
It looked strange when news outlets fenced themselves off from the audience. Many explanations were given by the media. See, for example, “Huffington Post to ban anonymous comments” [16] or “Online comments are being phased out” [17]. Popular Science even made a minor sensation in the industry in 2013 with its manifesto “Why We’re Shutting Off Our Comments. Starting today, will no longer accept comments on new articles. Here’s why” [18].
In the late 2010s, a solution seems to have been found. Advanced newsrooms have started applying algorithms to regulate commentaries. For example, the Washington Post and the New York Times in collaboration with the Mozilla Foundation founded the Coral Project, a project that produces open-source software to maintain online communications within and outside newsrooms. Using algorithms, editors and reporters can survey readers, moderate comment sections, and engage the audience in many other ways [19].
As the Coral Project promotes itself: “Our plugin architecture gives publishers incredible flexibility to select the features that make sense for your community—either across your site, or on a single article”.
For Commenters
  • Identify journalists in the conversation;
  • Mute annoying voices;
  • Manage your history;
  • Sort by most replied/liked/newest;
  • Follow and link to single discussions;
  • See new comment alerts instantly;
  • Manage separate identities on each site.
For Moderators
  • Feature the best comments and filter out the worst;
  • Use AI-assisted moderation to identify problems quickly;
  • See detailed commenter histories, and take bulk actions;
  • Integrate with industry-leading spam and abuse technologies;
  • Moderate faster via Slack integration, keyboard shortcuts, and more.
For Publishers
  • Own and manage all of your users’ data;
  • Connect to your existing login system;
  • Reward subscribers/donors with badges—or restrict commenting only to them;
  • Make your comments match your site design;
  • Translate into any language your audience speaks [20].
Such a detailed description is given here to show how powerful and helpful this tool can be. Most interestingly, as with any new medium, it does not just improve old functions (moderation at a scale that humans could never have executed), but also introduces new functions, such as organizing user-generated content for reporters’ further use or capitalizing on community involvement.
The Washington Post was the first news organization that integrated the Coral Project software called Talk with Modbot, the Post’s own “AI-powered comment moderation technology”. As they described the technology, “Talk’s moderation panel serves up statistics to help moderators understand a commenter’s contribution history at a glance. Then using ModBot, the system can remove comments that violate Post policies, approve comments that don’t, and provide analytics for moderators about the tenor of a conversation” [21].
The New York Times uses another algorithm (they also directly call it AI in their reports) “to host better conversations”. It is reported that,
[NYT] turned to Perspective, a free tool developed by Jigsaw and Google that uses machine learning to make it easier to host good conversations online. Perspective finds patterns in data to spot abusive language or online harassment, and it scores comments based on the perceived impact they might have on a conversation. Publishers can use that information to give real-time feedback to commenters and help human moderators sort comments more quickly. And that means news organizations and content publishers can grow their comment sections instead of turning them off, and can provide their readers with a better, more engaging community forum [22].
Before using algorithms for moderation, the Times struggled to maintain healthy conversations in comments and was only able to enable comments on about 10 percent of articles. After engaging machine-learning algorithms to help human moderators, “The New York Times was able to triple the number of articles on which they offer comments, and now have comments enabled for all top stories on their homepage” [22].
Can algorithms that help moderate commentaries be called “narrow artificial intelligence”? Maybe not yet. The idea is rather to show in which direction the development of the media is moving. Even if comment-moderating algorithms do not yet deserve to be called AI, they execute a job in one part of which they have already greatly exceeded humans, and in another part of which they can already be compared to humans. Namely, moderating algorithms have surpassed humans in comment tracking in terms of speed and volume. No human can compare to a machine in this. But more intriguing is the ability of algorithms to assess commentaries.
The assessment of other people’s tone and connotation is considered a human privilege and prerogative. However, algorithms are good not just at screening flagged words and expressions, but they are already able to perform semantic analysis. Even if they do not “understand” the essence of the offensive or hate speech concepts, they can use human reactions to comments as a tool of assessment. Then, machine learning comes into play.
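The idea of using human reactions as the assessment tool can be illustrated with a toy word-weight model: words that human moderators flag more often receive higher weights, and new comments are scored against those weights. Production systems such as Perspective use far richer machine-learned representations; everything below is a deliberately simplified illustration:

```python
from collections import Counter

def train(flagged, ok):
    """Learn word weights from past human moderation decisions."""
    bad, good = Counter(), Counter()
    for text in flagged:
        bad.update(text.lower().split())
    for text in ok:
        good.update(text.lower().split())
    # A word seen more often in flagged comments gets a higher weight
    # (add-one smoothing keeps unseen counts from zeroing the ratio).
    vocab = set(bad) | set(good)
    return {w: (bad[w] + 1) / (good[w] + 1) for w in vocab}

def score(text, weights):
    """Average weight of the comment's words; higher means more suspect."""
    words = text.lower().split()
    return sum(weights.get(w, 1.0) for w in words) / max(len(words), 1)

weights = train(flagged=["you are an idiot", "total idiot take"],
                ok=["great reporting", "thanks for the article"])
print(score("what an idiot", weights), score("great article", weights))
```

The model never “understands” offense; it merely generalizes from what humans have already flagged, which is precisely the machine-learning loop described above.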

2.4. Text Writing

Quakebot of the Los Angeles Times is able to collect and compare data, but the robot also composes ready-to-publish text reports. These reports, of course, are very simple since the robot just uses a set of templates. A criminal reporting robot does not write at all; it filters and categorizes data, puts them on a map, and so on. Analyses of such cases by media critics suggest that robots save time for human journalists, so that humans can do other, truly creative jobs.
The cases of financial and sports robo-journalism are much more complicated. In these fields, robots do not save time for humans; they take over the job entirely.
An algorithm called Wordsmith is one of the most hired and probably the most voluminous news-writing platforms. The algorithm developed by the hi-tech company Automated Insights can analyze data and put them into a coherent narrative with adjusted styles. According to Automated Insights’ website, “Wordsmith is the natural language generation platform that transforms your data into insightful narrative” [23].
One of Wordsmith’s employers, the Associated Press, uses the platform to produce earnings reports. Each quarter, companies release earnings, and news agencies inform their subscribers about companies’ ups and downs. The speed, accuracy, and analytical depth of reporting are important, because subscribers make business decisions based on these reports. So, news agencies build their business on these earnings recaps.
Associated Press had been able to produce only 300 earnings recaps per quarter before it hired Wordsmith. Thousands of potentially important company earnings used to be left unreported. The other problem related to the workload of reporters—earning recaps were “the quarterly bane of the existence of many business reporters”. As New York Magazine’s Kevin Roose put it, corporate earnings were “a miserable early-morning task that consisted of pulling numbers off a press release, copying them into a pre-written outline, affixing a headline, and publishing as quickly as possible so that traders would know whether to buy or sell” [24].
Media automation has solved both problems. The use of Wordsmith increased the Associated Press coverage of corporate earnings over tenfold. Now, the robot writes 4400 earnings stories per quarter. For the robot, it takes seconds to pull numbers from the earnings report, to compare them to previous data from the same company and data of competitors, to make a set of simple conclusions, to compose a smooth narrative structure, and to publish the story. Unlike its organic colleagues, the robot does not complain about how the job is boring and meaningless.
Here is a news story written by the Wordsmith algorithm and published by the Associated Press:
Apple tops Street 1Q forecasts
Apple posts 1Q profit, results beat Wall Street forecasts
AP. 27 January 2015 4:39 p.m.
CUPERTINO, Calif. (AP) _ Apple Inc. (AAPL) on Tuesday reported fiscal first-quarter net income of $18.02 billion. The Cupertino, California-based company said it had profit of $3.06 per share. The results surpassed Wall Street expectations. The average estimate of analysts surveyed by Zacks Investment Research was for earnings of $2.60 per share. The maker of iPhones, iPads and other products posted revenue of $74.6 billion in the period, also exceeding Street forecasts. Analysts expected $67.38 billion, according to Zacks. For the current quarter ending in March, Apple said it expects revenue in the range of $52 billion to $55 billion. Analysts surveyed by Zacks had expected revenue of $53.65 billion. Apple shares have declined 1 percent since the beginning of the year, while the Standard & Poor’s 500 index has declined slightly more than 1 percent. In the final minutes of trading on Tuesday, shares hit $109.14, an increase of 39 percent in the last 12 months [25].
Such recaps can be compiled in less than a second. The robot extracts the facts from the earnings report, makes the necessary synchronic and diachronic market comparisons, and generates a reasonably thorough and coherent text. The news is full of data but written in a meager style. Still, it is a decent narrative. After all, a financial report does not require stylistic decoration.
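The mechanics of such a recap can be illustrated with a minimal sketch of template-based generation. Wordsmith’s actual pipeline is proprietary; the function name, fields, and wording below are illustrative assumptions, not the real system:

```python
# Hypothetical sketch of template-based earnings-recap generation.
# Field names, thresholds, and phrasing are illustrative assumptions;
# they do not reproduce Wordsmith's proprietary implementation.

def earnings_recap(company, city, eps, eps_estimate,
                   revenue_bn, revenue_estimate_bn):
    """Compose a short recap from structured earnings data."""
    verdict = ("surpassed" if eps > eps_estimate
               else "matched" if eps == eps_estimate
               else "fell short of")
    return (
        f"{company}, based in {city}, reported earnings of "
        f"${eps:.2f} per share. The results {verdict} Wall Street "
        f"expectations of ${eps_estimate:.2f} per share. The company "
        f"posted revenue of ${revenue_bn:.1f} billion against forecasts "
        f"of ${revenue_estimate_bn:.2f} billion."
    )

print(earnings_recap("Apple Inc.", "Cupertino, California",
                     3.06, 2.60, 74.6, 67.38))
```

The “comparison” step here is a single conditional; a production system would layer many such rules over historical and competitor data, but the principle of slotting structured numbers into pre-written narrative remains the same.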
While experts discuss the prospects of applying algorithms in the media, the scale of their application is already beyond what one might imagine. In 2013, Wordsmith wrote 300 million stories. According to Lance Ulanoff of Mashable, this is more than all the major media companies combined [26]. In 2014, Wordsmith wrote 1 billion stories [27]. In 2016, it wrote 1.5 billion [28]. This is probably more than the output of all human journalists combined.
Wordsmith is not the only cyber reporter hired by the media. In the early 2010s, a company called Narrative Science developed StatsMonkey, a writing platform that generated baseball game recaps from applicable data such as players’ activities, game scores, and win probability. Here is a fragment of a children’s baseball league game report written by StatsMonkey:
Friona fell 10–8 to Boys Ranch in five innings on Monday at Friona despite racking up seven hits and eight runs. Friona was led by a flawless day at the dish by Hunter Sundre, who went 2–2 against Boys Ranch pitching. Sundre singled in the third inning and tripled in the fourth inning… Friona piled up the steals, swiping eight bags in all … [14].
StatsMonkey’s unique trait was that it used baseball slang. That was not its only advantage. Parents could enter the results of children’s games into a special iPhone app during play. StatsMonkey processed the statistics and generated texts almost immediately. The fans, the little players’ moms and dads, received a recap of the match even before the players finished shaking hands on the field. It goes without saying that such recaps, whatever their style, mattered far more to these fans than a Super Cup report.
In 2011, StatsMonkey wrote 400,000 reports for the children’s league. In 2012, it wrote 1.5 million [14]. For reference, there were 35,000 journalists in the USA that year [29]. They likely would not be willing to cover Little League games, regardless of how much money they were offered. That is another aspect of robot journalism: algorithms can cover topics that human reporters skip as not “newsworthy”, yet these topics still find highly loyal readers.
After StatsMonkey, Narrative Science developed an “advanced natural language generation platform” called Quill. Quill analyzes structured data and automatically generates “comprehensive narrative reporting and personalized customer communications” [30] that can be used in the media but also in all sorts of financial market communications.
Narrative Science rented out Quill’s writing skills to financial customers such as T. Rowe Price, Credit Suisse, and USAA. As a company representative said, “We do 10- to 15-page documents for some financial clients”. However, Quill also wrote for Forbes [31]. So, as the title of an article about it puts it, “Robot Journalist Finds New Work on Wall Street” [32]. In fact, Quill the robo-journalist in part repeated the professional trajectory of many human financial journalists: after succeeding in financial analysis and reporting, some writers move to the investment industry to write comprehensive, narrative-based financial reports for investors, partners, and clients. For Quill, as for human journalists, working for investment companies is probably more rewarding than working for media organizations.
There are also other companies producing natural language generation software. Columbia Journalism Review listed 11 providers of automated journalism solutions in different countries in 2016, stating that,
Thereof, five are based in Germany (AX Semantics; Text-On; 2txt NLG; Retresco; Textomatic), two in the United States (Narrative Science; Automated Insights) and France (Syllabs; Labsense), and one each in the United Kingdom (Arria) and China (Tencent). The field is growing quickly: the review is not even published yet, and we can already add another provider from Russia (Yandex) to the list [25].
In the UK, local newspapers have joined the automation project Urbs Media, supported by a 706,000-euro grant from Google. It aims to create 30,000 localized news reports every month. Urbs Media chose a natural language generation platform developed by Arria “to provide the AI backbone of its service” [33].
Leading media companies such as the Associated Press, Forbes, the New York Times, the Los Angeles Times, and ProPublica have started to automate news content [25]. Many news organizations have also begun developing their own in-house news-writing platforms. In fact, the amount of news coverage generated by robots is already huge. Even though many reports on robo-journalism suggest that human reporters need not worry about job security and should look for ways to collaborate with robots, there are plenty of reasons for worry. Robots already beat humans in quantity; the assumed superiority of humans in quality may be very much overestimated.
Not all cases of automated journalism involve artificial intelligence. However, narrow AI is undoubtedly at work in at least some projects, particularly those in which robots are already squeezing humans out.
It is also logical for news organizations to integrate all their automation efforts within newsrooms. Data mining (data journalism), topic selection, commentary moderation (community development), and, finally, text writing are not separate tasks in newsrooms; all these processes are interrelated. Organized around an intelligent platform, these tasks can not only be executed better individually, but also heighten the level of organizational coherence and integration. Some news organizations have already started to build next-level, AI-driven intelligent news platforms.
Thus, the Chinese Xinhua News Agency “has introduced the ‘Media Brain’ platform to integrate cloud computing, the Internet of Things, AI and more into news production, with potential applications ‘from finding leads, to news gathering, editing, distribution and finally feedback analysis’” [34]. As Emily Bell, a professor at the Columbia Journalism School, commented on Twitter, “There are already elements of this in quite a few newsrooms but this is the first announcement (I’ve seen) of a large news org rearranging itself around AI” [34].

3. Turing Test in Journalism: Passed

The most frequent question in discussions about the future of robot-human competitions in journalism is, “Are robots capable of writing better than humans?”
In other words, robots have already surpassed human reporters in data journalism, as well as in the speed and scale of news coverage. But can they beat humans in writing style?
This question implies two assumptions that are questionable themselves. First of all, do humans write well? What humans? All of them? Second, does the robot need to write better than whom? Salinger and Tolstoy?
In fact, the question about robots’ capability to excel beyond humans in writing implies conducting a sort of Turing test in journalism. The journalistic Turing test would differ from the classical one, of course, in that it is not interactive. In the classical Turing test, a human questions a robot (not knowing whether it is a robot) and can challenge the interlocutor with tricky questions in order to reveal whether it is a human or an algorithm. In the journalistic Turing test, there is no interactive aspect; what matters is simply whether a completed story is perceived as written by a human or by a robot.
Such journalistic Turing tests have already been conducted.
In May 2015, Scott Horsley, an NPR White House correspondent and former business journalist, boldly challenged Wordsmith. “We wanted to know: How would NPR’s best stack up against the machine?” NPR wrote [35]. As NPR is a radio broadcaster, a bio-journalist working there should be well trained in fast reporting. According to the rules of the competition, both competitors waited for Denny’s, the restaurant chain, to release an earnings report. Horsley had an advantage, as he was a Denny’s regular. He even had a regular waitress, Genevieve, who knew his favorite order: Moons Over My Hammy. It did not help, though that depends on how the results are judged.
The robot completed the task in two minutes. It took Scott Horsley a bit more than seven minutes to finish. NPR published both news pieces to offer readers a sort of journalistic Turing test.
Story #1
Denny’s Corporation on Monday reported first-quarter profit of 8.5 million dollars. The Spartanburg, South Carolina-based company said it had profit of 10 cents per share. The results beat Wall Street expectations. The average estimate of four analysts surveyed by Zacks Investment Research was for earnings of 9 cents per share. The restaurant operator posted revenue of $120.2 million in the period, also beating Street forecasts. Three analysts surveyed by Zacks expected $117.1 million. Denny’s shares have risen nearly 6 percent since the beginning of the year. In the final minutes of trading on Monday, shares hit $10.90, a climb of 61 percent in the last 12 months.
Story #2
Denny’s Corporation notched a grand slam of its own in the first quarter, earning a better-than-expected ten cents a share, as restaurant sales jumped by more than 7-percent. Operating revenues topped $120 million. Adjusted net income jumped 36 percent to $8.7 million. Denny’s is one of the nation’s largest full-service restaurant chains. The growth in sales suggests consumers are opening their pocketbooks for pancakes, eggs, and hash browns. Earnings were also helped by lower costs for raw materials. Denny’s results were also helped by the re-opening of the high-volume location inside the Las Vegas Casino Royale restaurant. After sales grew faster than expected in the first three months of the year, managers raised their sales forecast for the remainder of 2015 [35].
The first story was, of course, written by the robot. The figure density is higher and the style is drier. Horsley, meanwhile, added a bit of extraneous information to his version of the financial report, for instance, the sentence: “the growth in sales suggests consumers are opening their pocketbooks for pancakes, eggs and hash browns”.
Technically, the robot’s vocabulary is larger, as it includes the entire vocabulary of the national language: more than 1 million English words. But the robot has to use the most relevant, most conventional, i.e., most frequent, words, which eventually dries up its style. Moreover, the robot’s vocabulary is limited by industry specialization; a robot would never use cooking or sports vocabulary in a financial report.
Humans are the opposite. An educated native English speaker boasts a vocabulary of only around 100,000 words. But a human writer is not limited by word relevance or frequency and is free to use the rarest and most colorful words, which broadens context and brings vividness. Moreover, an original style of writing, often “deviating” from rational necessity, is what really makes a human a writer. A good writer can use the wrong wording deliberately, which is completely impossible for a robot writer (even if it were programmed to do so, the robot’s wrong wording would not be deliberate; the same goes for a bad human writer, who can use unsuitable words, but not intentionally). Robots simply do not “feel” a need for originality when completing a financial report.
“But that could change”, NPR suggests [35]. If the owner feeds Wordsmith more versatile NPR stories and modifies the algorithm a little, such a redesign could broaden and diversify Wordsmith’s vocabulary. These things are modifiable. Robo-journalists still will not grasp the need for originality, but they will be able to simulate stylistic diversity at a level where the artificiality of such stylistic coloring is not noticeable.
Here we approach the idea that a sufficiently large number of variations can compensate for writing algorithms’ lack of “senses”, at least at the level of routine consumer perception. Adding a “random word generator” with specific stylistic instructions can make the product (the text) as colorful as a person would make it (not to mention that human journalists face few demands regarding the colorfulness of their style).
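A “random word generator with stylistic instructions” could be as simple as sampling from pre-approved synonymous phrasings so that repeated recaps do not all read identically. The synonym sets and function below are a hypothetical sketch, not any vendor’s actual mechanism:

```python
import random

# Hypothetical sketch: vary a template's wording by sampling from
# pre-approved synonym sets (the "stylistic instructions").
STYLE_VARIANTS = {
    "beat": ["beat", "surpassed", "topped", "exceeded"],
    "reported": ["reported", "posted", "notched"],
}

def vary(template, rng=random):
    """Fill each {slot} in the template with a randomly chosen variant."""
    return template.format(**{slot: rng.choice(words)
                              for slot, words in STYLE_VARIANTS.items()})

print(vary("Denny's {reported} a profit that {beat} Wall Street forecasts."))
```

Each run yields a slightly different sentence; multiplied across thousands of templates and slots, this is enough variation to blur the mechanical origin of the text from a casual reader.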
So, who won the competition? The robot wrote faster and in a more business-like fashion. Scott Horsley was more “human-like” (which makes sense) but slower. The target audience of this writing consists of people in the financial industry. Is the lyrical digression about pocketbooks and pancakes valuable to them? As long as readers are humans and not other robots, it might be.
The result of the contest can most likely be counted as a tie. However, two minutes against seven for writing a story can be a significant margin in radio and financial reporting, where response time matters.
Interestingly, the Turing test assesses the human quality of style, but not human productivity or accuracy. It implies that style is the constituent of the human speech (and social) faculty that is most difficult for robots to master. Algorithms’ ability to master the other constituents of human speech essential to the media has raised no questions.
Another journalistic Turing test, though in a humorous form and for the purposes of entertainment, was offered by the New York Times. The Times composed a quiz that allows readers to guess whether a human or an algorithm wrote a story.
  • “A shallow magnitude 4.7 earthquake was reported Monday morning five miles from Westwood, California, according to the U.S. Geological Survey. The temblor occurred at 6:25 a.m. Pacific time at a depth of 5.0 miles”.
    (This excerpt of an initial report about a March 2014 earthquake was written by an algorithm.—The commentary was opened after passing the quiz; I answered correctly.—A.M.)
  • “Apple’s holiday earnings for 2014 were record shattering. The company earned an $18 billion profit on $74.6 billion in revenue. That profit was more than any company had ever earned in history”.
    (This was an excerpt from an article from Business Insider.—The commentary was opened after passing the quiz; I answered incorrectly.—A.M.)
  • “When I in dreams behold thy fairest shade
    Whose shade in dreams doth wake the sleeping morn
    The daytime shadow of my love betray’d
    Lends hideous night to dreaming’s faded form”.
    (This is an excerpt of a poem written by a poetry app.—The commentary was opened after passing the quiz; I answered incorrectly.—A.M.)
  • “Benner had a good game at the plate for Hamilton A’s-Forcini. Benner went 2–3, drove in one and scored one run. Benner singled in the third inning and doubled in the fifth inning”.
    (This was a sample report done by Quill, a Narrative Science product.—The commentary was opened after passing the quiz; I answered correctly.—A.M.)
  • “Kitty couldn’t fall asleep for a long time. Her nerves were strained as two tight strings, and even a glass of hot wine, that Vronsky made her drink, did not help her. Lying in bed she kept going over and over that monstrous scene at the meadow”.
    (The Russian novel “True Love” was written by a computer in St. Petersburg in 72 h.—The commentary was opened after passing the quiz; I answered incorrectly, being deceived, of course, by the names of Kitty and Vronsky, characters from Tolstoy’s “Anna Karenina”.—A.M.) … [36].
As for some brief reflections on the quiz: even knowing (or because of knowing) the advancement and distinctiveness of robo-journalism, even knowing (or because of knowing) that robots are already producing financial and sport reports, I was not able to differentiate human writing from algorithmic writing confidently. Furthermore, some prejudices about robots’ incapability to compose poetry (although they can; and it is already a well-known fact) forced me to make a mistake and assume that a piece of poetry was written by a human.
That said, at this level of media consumption, robo-writers have already passed the journalistic Turing test. The naivety of the test’s conditions reproduces the real situation of media consumption, which only strengthens the adequacy of the test and of its result.
Academics have staged a competition between “a horse and a steam engine”, too. Christer Clerwall, a media and communications professor from Karlstad, Sweden, asked 46 students to read two reports [37]. One was written by a robot and the other by a human. The human news story was shortened to the length of the robot’s. The robot’s story was slightly edited by a human so that its headline, lead, and first paragraphs resembled what an editor usually produces. Students were asked to evaluate the stories against criteria such as objectivity, trust, accuracy, boringness, interest, clarity, pleasure to read, usefulness, and coherence.
The results showed that each news story led in certain parameters and trailed in others. The human story led in categories such as “well-written” and “pleasant to read”. The robot’s story won in “objectivity”, “clear description”, “accuracy”, and the like. So humans and robots tied again.
But the most important thing the Swedish study revealed is that the differences between the average texts of a human writer and a cyber-journalist are insignificant. The distinction between human-written and robot-written texts is approximately the same as that between texts written by different humans. Both can be accepted by an editor.
This is a crucial argument for assessing the future and even the current state of robo-journalism. Cyber-skeptics have frequently argued that robots cannot write better than humans. But this is the wrong way to approach the issue. As professor Clerwall tells Wired, “Maybe it doesn’t have to be better—how about ‘a good enough story’?” [38].
In the New York Times’s article “If an Algorithm Wrote This, How Would You Even Know?” Shelley Podolny states that, “Robots make pretty decent writers” [27]. Indeed, even when measuring them by human standards, we can see that they write, if not better, then at least not worse than humans. At the very least, human readers cannot confidently distinguish robot and human writing.
The question about the ability of AI to replace humans in journalism usually comes down to the question “Are algorithms able to write better than humans?” This line of thinking is, in fact, incorrect. To be a journalist, a human does not need to write better than other humans (again, which humans? Dostoyevsky? Twain?); a human only has to write well enough. The same goes for algorithms. To be hired in the media, robots do not have to write better than humans; they have to write well enough. And they do.

4. Counterarguments about “Robots’ Incapability”

The other doubt about AI in journalism relates to machines’ inability to simulate human creative talents.
In the media, the “creativity counterargument” is represented by doubts about an algorithm’s ability (1) to invent/discover and (2) to distinguish beauty/originality in writing.
Let us examine these doubts.

4.1. A Robot Cannot Invent/Discover

Yes, certainly, it is hard to imagine a robot exclaiming “Eureka!” Serendipity is a human gift. A human can stumble upon inventions or discoveries accidentally and for no reason (as when an apple falls on one’s head). At the same time, human inventors are usually capable of recognizing an invention, even an accidental one. Thus, invention/discovery is characterized by a strange combination of preparedness and the impossibility of being confidently prearranged. No computational or linear process leads to it. It is impossible to calculate or code an invention/discovery.
In journalism, invention/discovery can relate to conceiving a topic or developing texture. This mandatory, creative part of a journalist’s job seems impossible for a robot to reproduce.
That is why skeptics would say that robots will not be able, for example, to “smell” a potential sensation in a sequence of same-type events, as a human editor easily does. Nor will a robot be able to decide to overblow a story, as human editors do by intentionally picking one event from a seemingly indistinguishable mass of similar ones.
However, what if, in turn, robots can do something that humans cannot; some other kinds of novelty, if not invention?
This potential novelty, the new knowledge that robots can obtain and humans cannot, lies in the cross-analysis of big data and correlations. The ability to see correlations behind big data is incomprehensible to humans but can be considered, for robots, a substitute for the human faculty of invention/discovery.
We humans value causation over correlation, probably because of our lack of computational skill. Many correlations revealed through the analysis of big data seem strange to us. Tyler Vigen, in his book Spurious Correlations [39], presents more than 30,000 correlations extracted from big data that seemingly make no sense. For example, US spending on science, space, and technology correlated with the number of suicides by hanging, strangulation, and suffocation over the decade 1999–2009 with astonishing precision. In the same decade, the number of people who drowned by falling into a pool closely correlated with the number of films in which Nicolas Cage appeared. The divorce rate in Maine closely correlated with per capita consumption of margarine in 2000–2009, and so on.
Always looking for causality but not being able to find it behind correlation, we reject any sense when considering spurious correlations. However, some of these correlations may make someone think twice. For example, the total revenue generated by arcades almost precisely correlates with the number of computer science doctorates awarded in the US in 2000–2009. What if there is something behind such coincidences?
Properly trained algorithms can examine correlations not only between two variables, but between three, 33, or 3000. The range of possible correlations is enormous and, in fact, unlimited: anything could correlate with anything through anything else. Unlike the human Eureka or serendipitous moment, this is calculable. We cannot even imagine the limits of this intellectual operation, as the size of databases and the processing speed of machines keep growing. But if correlations are detected in massive amounts of data, they may mean something. For humans, they make sense only if reduced to causation. For algorithms, causation is not a motive for processing information. AI can process data without searching for causality, a type of intellectual motivation very distinct from that of humans.
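The kind of brute-force scan behind spurious correlations can be sketched in a few lines: compute a Pearson coefficient for every pair of series and flag the strong ones. The three yearly series below are invented for illustration (they only loosely echo Vigen’s examples), and the 0.8 threshold is an arbitrary assumption:

```python
from itertools import combinations
from statistics import mean

# Hypothetical sketch of a pairwise correlation scan over "big data".
# The series and the 0.8 threshold are illustrative assumptions.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

series = {
    "science_spending": [18.1, 18.6, 19.4, 20.4, 21.0, 21.7],
    "pool_drownings":   [109, 102, 102, 98, 85, 95],
    "cage_films":       [2, 2, 2, 3, 1, 2],
}

# Flag every pair whose absolute correlation exceeds the threshold.
flagged = [(a, b, round(pearson(series[a], series[b]), 2))
           for a, b in combinations(series, 2)
           if abs(pearson(series[a], series[b])) > 0.8]
print(flagged)
```

With thousands of series, the number of pairs (and triples, and higher combinations) explodes combinatorially, which is exactly why this search is tractable for machines and hopeless for unaided humans.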
Those weird but sustained correlations that are so easily detected by algorithms look like magic to us; but they can also be new knowledge, or even a new type of knowledge, which is always a sort of magic to those to whom it is incomprehensible.
This reasoning shows that robots, too, have something at their disposal regarding the novelty of knowledge. How will they manage this ability to learn new things? Hypothetically, the ability to detect correlations opens the potential to learn and reveal everything, possibly in ways we may not understand. It depends only on the amount of big data and the processing speed.
From this perspective, a robot’s lack of inventive/discovery talent seems an increasingly unimportant disadvantage. Robots can find fantastic correlations that are sometimes of great practical importance for marketing, politics, or the media. The world is full of them. They work without any explanation through causality, and bio-journalists are simply unable to see them.
What if an algorithm’s ability to identify correlations compensates for or even outdoes the human skill of invention/discovery? Facts derived from big data may be as interesting and irrational as the outcomes of human creativity. Another relevant consideration: we already know the potential of human creativity, while big data processing and correlation detection are just at the beginning of a potentially endless path.

4.2. Robots Do Not Understand Beauty or Originality

Yes, it is true; robots do not aim to write in a beautiful manner. Even if they had such a goal, what would be defined as “beauty”? What is that?
However, even if it is impossible to calculate beauty, it is possible to calculate human reactions to it. Humans themselves can serve robots as a new type of servomechanism—beauty-meters.
It is quite possible to detect correlations between texts, headlines, or even certain expressions, on the one hand, and human reactions to them, such as likes, shares, comments, and click-throughs, on the other. The “size” of big data matters here, too: the more texts and headlines with human reactions an algorithm obtains, the more precise its “understanding” of the human perception of beauty becomes. Even today, robots can identify the attractiveness of headlines, topics, keywords, etc., by observing people’s reactions. Editors guess; robots know.
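A toy version of such a “beauty-meter” can learn per-word click-through rates from historical headlines and score new candidates by the average rate of their words. The headlines and rates below are invented, and real systems would use far richer features than single words; this is only a sketch of the principle:

```python
from collections import defaultdict

# Hypothetical "beauty-meter": learn average click-through rates (CTR)
# per word from past headlines, then score new candidates. All data here
# is invented for illustration.

history = [
    ("apple tops street forecasts", 0.031),
    ("quake reported near westwood", 0.058),
    ("denny's tops earnings forecasts", 0.024),
    ("quake shakes california coast", 0.061),
]

rates = defaultdict(list)
for headline, ctr in history:
    for word in headline.split():
        rates[word].append(ctr)

def score(headline):
    """Mean historical CTR of the headline's known words (0.0 if none known)."""
    known = [sum(rates[w]) / len(rates[w])
             for w in headline.split() if w in rates]
    return sum(known) / len(known) if known else 0.0

# On this history, a quake headline outscores an earnings headline.
print(score("quake reported in california"), score("apple earnings tops street"))
```

No notion of “beauty” appears anywhere in the code; the algorithm only aggregates human reactions, which is precisely the point of the argument above.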
Robots’ capacity to read human reactions will only grow. An algorithm developed by Facebook already customizes newsfeeds according to users’ reactions, from which personal preferences can be calculated. With the help of biometrics, robots will be able to analyze human physiological reactions to certain semantic and idiomatic expressions, epithets, syntactic structures, and visual images.
If algorithms do not have senses of their own, humans can serve as receptors that convert sensory reflexes into computable signals. As tools and mechanisms were once extensions of humans, humans can now be good extensions of machines, allowing machines to enhance their faculties and reach beyond their “natural” limitations.
Maybe it is time to revisit the idea of who serves whom. Marshall McLuhan, in his Playboy interview, foresaw that “man becomes the sex organs of the machine world just as the bee is of the plant world, permitting it to reproduce and constantly evolve to higher forms” [40]. In Understanding Media, he wrote that, “By continuously embracing technologies, we relate ourselves to them as servomechanisms. That is why we must, to use them at all, serve these objects, these extensions of ourselves, as gods or minor religions. An Indian is the servo-mechanism of his canoe, as the cowboy of his horse or the executive of his clock” [41] (p. 46).
By operating beauty-meters that rely on measuring human reactions, algorithms will be able to automatically produce more attractive texts and headlines without understanding the concept of beauty (or originality, or style).
In other words, for every argument about what robots cannot do, there is a more convincing argument about what robots can do instead. In this competition of capabilities, robots and humans also end up in a tie.
The competition has just begun, but it is a tie already. Humans are an old team, while the younger robot team is just making its debut.

5. Roadmap for Artificial Intelligence in the Media—And Beyond

Considering the possible accomplishments that will allow AI to bypass its lack of creativity, it is possible to outline the future development of artificial intelligence in the media. It will lie in three main interrelated areas: data processing, data accumulation, and understanding human reactions.
(1) Data processing. Algorithms are being developed to manage big data, and their ability to find correlations will in some way replace human creativity. The best human minds are working on this now, refining algorithms for the best possible performance. These brilliant minds, the coders, developers, and engineers of the real and symbolic Silicon Valley, do not care about saving journalism. They aim to implement their innovations and tools without reservation and often without moral consideration. It is humans (and the smartest humans), not robots, who will be responsible for the replacement of humans by robots.
“If there is a free press, journalists are no longer in charge of it. Engineers who rarely think about journalism or cultural impact or democratic responsibility are making decisions every day that shape how news is created and disseminated”, said Emily Bell, professor at the Columbia Journalism School, in her speech with a title that speaks for itself, “Silicon Valley and Journalism: Make up or Break up?” [42].
(2) Data accumulation. As everything now leaves a footprint on the Internet, a database of all texts and all audience reactions can sooner or later be assembled. By monitoring, gathering, and analyzing all journalistic texts and people’s reactions to them, algorithms will be able to calculate which texts, with which parameters, earn more likes, reads, reposts, and comments.
(3) Understanding human reactions. Learning human reactions is one of the most important tasks for artificial intelligence in its effort to convince us that it can replace us. Understanding human reactions has already become a crucial factor in social media and marketing, both led by algorithms. The same goes for the media. For now, algorithms’ reading of human reactions comes down to analyzing likes, reposts, time spent with content, and so on. But this will change.
Once robots gain access to human non-verbal reactions and body language, they will be able to calculate inexplicit reactions instantly. For instance, if someone reads a story about Trump and has certain somatic reactions, technologies are already capable of reading some of them: webcams can scan the movement of pupils, microphones can register quickened breathing, and detectors added to touchscreens could sense heartbeat or perspiration.
Even based on relatively “rational” human reactions (likes, reposts, time spent), algorithms can comprehend the audience’s preferences better than human editors can. Once biometrics is incorporated, algorithms’ understanding of the audience’s preferences will reach the forensic precision of a polygraph.
Interestingly, as algorithms develop the editorial skills required of them in the media, artificial intelligence equipped with biometric detectors will become a very inquisitive and enormously powerful polygraph for all of society: a global polygraph for the global village, another version of the Orwellian Big Brother, but introduced into the Huxleyan Brave New World. The development of biometrics will not be determined by any final cause; rather, it will be market-driven, meaning it will proceed and succeed.
These three aspects of algorithmic development matter not only within the media industry. They may also pave the way for real, general (or strong) AI to come.
Narrow AI in the media has already arrived. When it integrates all newsroom tasks, including the editor’s job of allotting assignments, it will approach the idea of goal setting. Smart robo-editors able to pick topics, set tasks, and measure human reactions will gain power over the entire production cycle in the media. There is little need to say how important this is in terms of agenda setting and ruling over people. Society has already faced similar problems with Facebook’s algorithms, though these are just a forerunner of problems to come.
Ultimately, an intelligence able to set goals for itself is no longer merely an intelligence; it is a being with a will of its own.

6. Conclusions. Will They Replace Journalists? Yes

In 2012, Kristian Hammond, co-founder of Narrative Science, predicted that algorithms would write 90% of media content by 2030. As Wired quoted him, “In 20 years, there will be no area in which Narrative Science doesn’t write stories” [14].
The author of that article in Wired, Steven Levy, wrote, “As the computers get more accomplished and have access to more and more data, their limitations as storytellers will fall away. It might take a while, but eventually even a story like this one could be produced without, well, me”.
Interestingly, Hammond also predicted that a computer would write a story worthy of a Pulitzer Prize “in five years”, that is, by 2017. This did not happen. However, the symbolic act of awarding a Pulitzer to a robo-journalist will no doubt eventually occur, just as the 2016 Nobel Prize in Literature was awarded to Bob Dylan with an obvious intention to mark a new trend in literature, or rather to mark the death of the old literature. The same will happen to journalism.
Concluding this review of possible AI developments in the media, it can be said that human journalists are in both a qualitative and a quantitative competition with their cyber-colleagues. This competition is not at its beginning, as the general public might think; it is moving towards its end. In the quantitative contest, bio-journalists have already lost. They are set to lose the qualitative competition within 5–7 years.
It is also interesting that, in the period of transition from a predominantly organic to a predominantly cybernetic form of journalism, it will be humans, not robots, who drive the process forward. First, developers, coders, and engineers simply do their job well and see no reason to stop. Second, it will be editors who hire robots to write, to moderate comments, and to select topics. In fact, it will be editors who kill the profession for humans and turn it into a function performed by robots.
The reason is simple: the economics of the media business. Newsrooms have to produce as much content as possible in order to increase traffic, views, click-through rates, and so on. After switching from the “portioned” production of periodicals, TV, and radio to the “streaming” production of the internet, the media have to run more and more stories. Online means non-stop; it is motion for motion’s sake. Media theorist Dean Starkman called this effect the “hamsterization of journalism” [43]. Hamsterization reduces the time a journalist spends on each story in order to produce more stories: “do more with less”.
Let us imagine that a good article, meaning good journalism, can attract thousands of readers. But what if a thousand news stories written over the same time period can each attract just a hundred readers, a hundred thousand in total? When traffic is king, editors do not need the best journalists; they need fast journalists. Whom will the editor choose: a capricious, talented (or not so talented) journalist with increasing salary demands and three stories per week, or a flawless algorithm with decreasing maintenance costs that can produce three stories per minute?
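The editor’s arithmetic here can be made explicit in a minimal sketch. All figures are illustrative, taken from the thought experiment above (the “thousands of readers” for a good article is assumed here to be 5000; none of these numbers are real market data):

```python
# Illustrative reach comparison from the thought experiment above.
# All figures are hypothetical, not real traffic data.

good_article_readers = 5000        # one good article: "thousands of readers"

robo_stories = 1000                # stories written over the same period
readers_per_robo_story = 100       # "just a hundred readers each"
robo_total_readers = robo_stories * readers_per_robo_story

print(robo_total_readers)                          # 100000
print(robo_total_readers / good_article_readers)   # 20.0
```

Under these assumptions the stream of fast stories out-draws the single good article by a factor of twenty, which is the whole of the editor’s economic case.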
The Associated Press buys the Wordsmith service not because the algorithm writes better than humans, but because it writes more and faster. Debates about the quality of the writing are beside the point. Robots will conquer newsrooms not for belletristic reasons but for economic ones.
If humans do preserve jobs in the media, it will happen not for economic reasons but because of the social need to keep people employed. This happens in many industries: preserving jobs becomes more important than increasing efficiency. Socialism is beating capitalism in this respect. This is the only substantial reason for people to stay in the media, and it lies beyond the context of competition with algorithms.
Thus, the robots’ advent in the media is unstoppable. Under these conditions, the most beneficial strategy for newsrooms is to be among the first at the beginning of robotization and among the last at its end.
For now, the editorial use of algorithms can be an interesting PR strategy, attractive to both audiences and investors. But when algorithms fill the market, the rare human voice will be in demand amid the chorus of robots.
In this sense, strange as it seems, human journalism will be particularly valued as a distinct flavor at the final stages of the robotization of the media. Moreover, editorial human errors will be particularly valued and attractive, and human-made media will capitalize on them. That will last at least until robots learn to simulate human errors, too, in order to substitute for humans even better.
Of the 1.5 billion stories Wordsmith published in 2016, some simply increased the physical amount of content; others, however, already replaced human writing. The Associated Press exemplifies this. Before AP hired Wordsmith, human reporters were writing 300 earnings recaps per quarter; Wordsmith writes 4400. This suggests a probable quantitative pattern: robo-journalists produce nearly fifteen times the former volume of content while driving out the humans’ former share of 300 recaps.
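The Associated Press figures allow a quick check of that quantitative pattern. Only the 300 and 4400 recap counts come from the sources cited above; the ratio simply follows from them:

```python
# Earnings recaps per quarter at the Associated Press (figures from the text).
human_recaps = 300       # written by human reporters before Wordsmith
wordsmith_recaps = 4400  # written by the Wordsmith algorithm

# How many times the former human output the algorithm now produces.
ratio = wordsmith_recaps / human_recaps
print(round(ratio, 1))   # 14.7: roughly fifteen times the former volume
```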
The market is going to demand more and more. Nothing can stop robots from writing as much as is required of them, since the only limit is the amount of content that people can read. Even this limit will be removed once the readers are robots, too.


Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.


References

  1. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2009.
  2. McLuhan, M. Understanding Media: The Extensions of Man; McGraw Hill: New York, NY, USA, 1964.
  3. De Chardin, P.T. The Phenomenon of Man; Harper: New York, NY, USA, 1959.
  4. Military Applications of Artificial Intelligence. Bulletin of the Atomic Scientist, April 2018. Available online: (accessed on 30 June 2018).
  5. Kerner, I. What the Sex Robots Will Teach Us. CNN, 13 March 2018. Available online: (accessed on 30 June 2018).
  6. Owsianik, J. State of Sex Robots: These Are the Companies Developing Robotic Lovers., 16 November 2017. Available online: state-sex-robots-companies-developing-robotic-lovers/ (accessed on 30 June 2018).
  7. Lieberman, H. In Defense of Sex Robots. Quartz, 2 March 2018. Available online: (accessed on 30 June 2018).
  8. Jeremy, B. How The New York Times Uses Software to Recognize Members of Congress. Times Open, 6 June 2018. Available online: software-to-recognize-members-of-congress-29b46dd426c7 (accessed on 30 June 2018).
  9. Bouchart, M. A Data Journalist’s Microguide to Environmental Data. Data Journalism Blog, 15 January 2018. Available online: (accessed on 30 June 2018).
  10. Meyer, R. How a California Earthquake Becomes the News: An Extremely Precise Timeline. The Atlantic, 19 March 2014. Available online: 2014/03/how-a-california-earthquake-becomes-the-news-an-extremely-precise-timeline/284506/ (accessed on 30 June 2018).
  11. Oremus, W. First Earthquake Report Written by a Robot. Screenshot. Source: The First News Report on the L.A. Earthquake Was Written by a Robot. Slate, 17 March 2014. Available online: (accessed on 30 June 2018).
  12. Schwencke, K. How to Break News While You Sleep. Source, 24 March 2014. Available online: (accessed on 30 June 2018).
  13. The Homicide Report. The Los Angeles Times. Available online: (accessed on 30 June 2018).
  14. Levy, S. Can an Algorithm Write a Better News Story than a Human Reporter? Wired, 24 April 2012. Available online: story-than-a-human-reporter/ (accessed on 30 June 2018).
  15. Globe and Mail to Tap into Online Data to Help Reshape Daily Newspaper. Canadian Press, 6 September 2017. Available online: reshape-daily-newspaper/ (accessed on 30 June 2018).
  16. Landers, E. Huffington Post to Ban Anonymous Comments. CNN, 22 August 2013. Available online: index.html (accessed on 30 June 2018).
  17. Gross, D. Online Comments Are Being Phased out. CNN, 21 November 2014. Available online: (accessed on 30 June 2018).
  18. LaBarre, S. Why We’re Shutting off Our Comments. Starting Today, Will No Longer Accept Comments on New Articles. Here’s Why. Popular Science, 24 September 2013. Available online: (accessed on 30 June 2018).
  19. Erickson, T. Will Comment Sections Fade away, or Be Revived by New Technologies? MediaShift, 19 January 2018. Available online: (accessed on 30 June 2018).
  20. The Coral Project. Available online: (accessed on 30 June 2018).
  21. The Washington Post Launches Talk Commenting Platform. WashPost PR Blog, 6 September 2017. Available online: (accessed on 30 June 2018).
  22. New York Times: Using AI to Host Better Conversations. Blog Google. Available online: (accessed on 30 June 2018).
  23. Automated Insights. Available online: (accessed on 30 June 2018).
  24. Roose, K. Robots Are Invading the News Business, and It’s Great for Journalists. New York Magazine, 11 July 2014. Available online: (accessed on 30 June 2018).
  25. Graefe, A. Guide to Automated Journalism. Columbia Journalism Review, 7 January 2016. Available online: (accessed on 30 June 2018).
  26. Ulanoff, L. Need to Write 5 Million Stories a Week? Robot Reporters to the Rescue. Mashable, 2 July 2014. Available online: (accessed on 30 June 2018).
  27. Podolny, S. If an Algorithm Wrote This, How Would You Even Know? The New York Times, 7 March 2015. Available online: (accessed on 30 June 2018).
  28. Allen, R. The AI Entrepreneur’s Moral Dilemma. Machine Learning in Practice Blog, 12 July 2017. Available online: (accessed on 30 June 2018).
  29. Up against the Paywall. The Economist, 19 November 2015. Available online: (accessed on 30 June 2018).
  30. Quill. Narrative Science Web Site. Available online:  (accessed on 30 June 2018).
  31. Morozov, E. A Robot Stole My Pulitzer! How Automated Journalism and Loss of Reading Privacy May Hurt Civil Discourse. Slate, 19 March 2012. Available online: (accessed on 22 July 2018).
  32. Simonite, T. Robot Journalist Finds New Work on Wall Street. Technology Review, 9 January 2015. Available online: (accessed on 30 June 2018).
  33. Marr, B. Another Example of How Artificial Intelligence Will Transform News and Journalism. Forbes, 18 July 2017. Available online: (accessed on 30 June 2018).
  34. Schmidt, C. China’s News Agency Is Reinventing Itself with AI. NiemanLab, 10 January 2018. Available online: (accessed on 30 June 2018).
  35. Smith, S.V. An NPR Reporter Raced a Machine to Write a News Story. Who Won? NPR, 20 May 2015. Available online: (accessed on 30 June 2018).
  36. Did a Human or a Computer Write This? The New York Times, 7 March 2015. Available online: (accessed on 30 June 2018).
  37. Clerwall, C. Enter the Robot Journalist. Users’ perceptions of automated content. J. Pract. 2014, 8, 519–531.
  38. Clark, L. Robots Have Mastered News Writing. Goodbye Journalism. Wired, 6 March 2014. Available online: (accessed on 30 June 2018).
  39. Vigen, T. Spurious Correlations; Hachette Books: New York, NY, USA, 2015.
  40. McLuhan, M. The Playboy Interview. Playboy Magazine, March 1969.
  41. McLuhan, M. Understanding Media: The Extensions of Man; The MIT Press: Cambridge, MA, USA, 1994.
  42. RISJ Admin. Silicon Valley and Journalism: Make up or Break up; The Reuters Institute for Study of Journalism, University of Oxford: Oxford, UK, 2014; Available online: (accessed on 30 June 2018).
  43. Starkman, D. The Hamster Wheel. Why Running as Fast as We Can Is Getting Us Nowhere. Columbia Journalism Review, September/October 2010. Available online: (accessed on 30 June 2018).