Robot Reporters or Human Journalists: Who Do You Trust More?
Do you trust robot reporters?
October 24, 2014


This article summarizes a research paper presented at the 2014 Computation + Journalism Symposium. Hille van der Kaa is a lecturer at Tilburg University in the Netherlands; Emiel Krahmer is a professor at Tilburg University. See the full collection of research summaries.


By Hille van der Kaa and Emiel Krahmer

The public trusts news stories written by machines just as much as those written by human journalists, but journalists don’t feel the same way, according to newly released research from the Netherlands.

A study we conducted at Tilburg University found that journalists perceive human reporters to be more trustworthy news sources than the computer programs that are increasingly spitting out automated articles.

This is perhaps not surprising since the rise of automated journalism worries many journalists who fear that robot writers will one day replace them. At the same time, machine-generated news evokes curiosity among many readers who wonder about its accuracy.

A fair amount of research already exists assessing the quality of news text generated by both computer algorithms and humans. Our research team decided to explore automated news-writing systems from a different angle: their perceived credibility. We assessed whether readers consider robot-written news articles more or less trustworthy than those authored by humans.

To answer the question, we showed machine-written news articles to hundreds of news consumers, including some journalists. All of the articles were generated by computers, but some participants were told a computer had written the story they read, while others were told a journalist had written it. We then asked them to evaluate the stories on various factors.

Among news consumers, we found no difference in perceived credibility or expertise, either of the articles' contents or of their source: news consumers considered a computer writer as trustworthy as a journalist, and as expert. Journalists, however, disagreed. They perceived journalists to be more trustworthy than computer writers, yet they gave computers more credit for expertise than general news consumers did.

As far as we are aware, the only previous study that addressed how audiences perceive the credibility of machine-generated news is one published early in 2014 by Christer Clerwall, from the Department of Media and Communication Studies at Karlstad University in Sweden.

Building on his research, we decided to look more systematically into perceptions of automated articles. Our study not only looked for differences and similarities between journalists and news consumers in the perceived credibility and expertise of the news source; it also examined whether perceptions depend on the topic of a story.

Before explaining our experiment in greater detail, it’s worth taking a look at how automated writing systems work.

What computer writers and journalists have in common

The automation of journalism has entered a new phase with the rise of computer software programs that generate news articles. ‘Robot writing’ or ‘algorithmic writing’ comes from the field of Natural Language Generation (NLG), which is the process of automatically creating natural language text based on non-linguistic input.

News writing relies on a basic formula, and there are key elements to every news story. These key elements of a journalist's work show strong similarities to the tasks of a robot writer.

An NLG system must first determine which information to express ('research' in journalistic terms, usually called 'content selection' in NLG). Second, it must organize the available information and decide on a structure for the text ('selecting,' or 'text planning'). It then determines which information goes into each sentence and chooses the right words to express that information ('structuring,' or 'sentence planning'). Finally, it turns these choices into grammatically correct, readable sentences, yielding the finished text ('writing,' or 'linguistic realization').
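To make these four stages concrete, here is a minimal sketch in Python of how such a pipeline might look for a football result. The match data, field names, and template wording are hypothetical and only illustrate the stages; they are not taken from any particular commercial system.

```python
# Illustrative NLG pipeline for a football result.
# All data, names, and wording are hypothetical.

match = {
    "home": "Ajax", "away": "Feyenoord",
    "home_goals": 3, "away_goals": 1,
    "top_scorer": "Jansen",
}

def content_selection(data):
    """'Research': decide which facts are worth reporting."""
    facts = [("result", data)]
    if abs(data["home_goals"] - data["away_goals"]) >= 2:
        facts.append(("dominance", data))
    return facts

def text_planning(facts):
    """'Selecting': put the chosen facts into a document order."""
    order = {"result": 0, "dominance": 1}
    return sorted(facts, key=lambda fact: order[fact[0]])

def sentence_planning(fact):
    """'Structuring': map each fact onto a sentence template and word choice."""
    kind, d = fact
    if kind == "result":
        verb = "beat" if d["home_goals"] > d["away_goals"] else "lost to"
        return f"{d['home']} {verb} {d['away']} {d['home_goals']}-{d['away_goals']}."
    if kind == "dominance":
        return f"{d['top_scorer']} led a one-sided match."

def linguistic_realization(sentences):
    """'Writing': assemble the sentences into the final text."""
    return " ".join(sentences)

facts = content_selection(match)
plan = text_planning(facts)
sentences = [sentence_planning(fact) for fact in plan]
print(linguistic_realization(sentences))
# -> "Ajax beat Feyenoord 3-1. Jansen led a one-sided match."
```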

The first NLG systems produced very simple texts with little or no variation. However, over the years more linguistic insights were included and techniques were developed for generating more varied texts.

Nowadays, companies such as Narrative Science and Automated Insights create computer-written texts that are far more sophisticated and ‘human-like’ than the output of the first NLG systems. It can even be hard to tell the difference between human-written and computer-written articles, as Clerwall found in his study. He asked research subjects to assess whether a computer or a human had written specific articles, and many incorrectly guessed that a human had written stories that were actually generated by software.

Can robot writers replace journalists?

The research by Clerwall and others naturally leads to this question: If the difference between journalistic content produced by software and human reporters is not evident, will robots replace journalists?

A large part of the answer is based on economics, that is, whether it is cheaper to hire and train a person or to create and maintain a software program. The economic decision usually depends largely on the volume of text produced.
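As a rough illustration of that trade-off, the break-even point can be sketched as follows. The cost figures below are entirely invented and only show how volume drives the decision; they do not come from the study.

```python
# Hypothetical back-of-the-envelope comparison; all figures are invented.
software_fixed_cost = 50_000   # build and maintain the NLG system, per year
software_per_article = 0.10    # marginal cost of one generated article
human_per_article = 25.00      # reporter time for one routine article

# Volume at which the software becomes the cheaper option.
break_even = software_fixed_cost / (human_per_article - software_per_article)
print(f"Break-even at about {break_even:,.0f} articles per year")
# -> roughly 2,008 articles per year under these assumptions
```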

Clerwall concluded that successful applications of robot journalism will free resources, allowing reporters to focus on assignments they are more qualified for, and leaving the descriptive summaries to the software.

Cost is not the only factor, however. In some cases, it is technically impossible to generate the required texts with current automated-writing technology. In these cases, manual production is the only option. Some researchers state that machines do not have the creativity to avoid clichés and add humor, although others might disagree. Moreover, some claim machines do not have the flexibility or analytical skills needed to write non-routine stories.

On the other hand, automated writing techniques may be preferred over manual document creation because they can increase accuracy and reduce updating time. In addition, automated techniques enable personalized news presentations, in which different versions of the same article can be generated with specific audiences in mind.
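A hedged sketch of what such personalization could look like: the same set of facts is realized differently depending on the intended audience. The team names, audience labels, and wording are again hypothetical.

```python
# Illustrative only: one set of facts, two audience-specific realizations.
match = {"home": "Ajax", "away": "Feyenoord", "home_goals": 3, "away_goals": 1}

templates = {
    "home_fans": "{home} cruised past {away} with a convincing {home_goals}-{away_goals} win.",
    "neutral":   "{home} defeated {away} {home_goals}-{away_goals}.",
}

for audience, template in templates.items():
    print(audience, "->", template.format(**match))
```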

But news isn’t just about information or economics. It is also about trustworthiness. How do audiences perceive the rise of automated writing systems?

The experiment and findings: Same stories, different “authors”

To find answers, we used two news articles generated in Dutch by a software program. One reported on a sports event (the results of a football match); the other covered a finance topic (stock prices). We created two versions of each story; the only difference between them was whether the source, or author, was described as a computer or as a journalist. The contents and sentences of the stories were otherwise identical.

We randomly showed a story to 232 native Dutch speakers (among them 64 journalists) and asked them to evaluate the perceived expertise and trustworthiness of the news writer and the contents of the story.
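For readers who want the design spelled out, here is a minimal sketch of the underlying assignment logic (topic crossed with attributed source), using placeholder labels rather than the actual Dutch materials.

```python
import random

# Placeholder labels: in the study both texts were machine-generated in
# Dutch, and only the attributed author varied between conditions.
topics = ["sports (football match report)", "finance (stock prices)"]
attributed_sources = ["a journalist", "a computer"]

def assign_condition(participant_id):
    """Randomly assign one story version (topic x attributed source)."""
    return {
        "participant": participant_id,
        "topic": random.choice(topics),
        "attributed_source": random.choice(attributed_sources),
    }

# 232 native Dutch speakers took part, 64 of them journalists.
conditions = [assign_condition(i) for i in range(232)]
```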

As previously stated, our study found no differences in the perceptions that news consumers held regarding the credibility of machine-written stories versus articles they thought were created by humans.

But this was not the case among the journalists, who perceived journalists to be more trustworthy than their computer “colleagues.”

Further, journalists gave computers a higher expertise rating than regular news consumers did. Several journalists left remarks, such as “This is actually not bad for a computer.”

Finally, the research also suggests that story topic has an influence on a news item’s perceived trustworthiness. Overall, respondents perceived the trustworthiness of the sports article to be lower than that of the finance article.

Unanswered questions: more research needed

Our results leave us with new questions:

  • Do journalists overestimate their level of trustworthiness compared to a computer writer? Our results seem to suggest this. And if that is true, what are the reasons?
  • Does this create an increased risk that journalists will be displaced by the rise of automated journalism, since they might not see the need to expand their repertoire to include more areas where computers can’t compete?
  • Why do journalists differ from news consumers in their perception on the level of expertise? Is it because of their training, ethics, a greater awareness of robot reporting — or something else?
  • Since the topic of the text seems to influence the results, for which subjects do people find algorithmic approaches more credible?

 

New research is currently being developed to answer these questions.
