Give Journalists the Right Metrics, Not Pay-for-Performance
April 15, 2014
Tony Haile

That there is even a debate about whether newsrooms should have access to metrics is in some ways surprising. Of all the professions, journalists are some of the most inquisitive, tenacious and focused upon the impact of their work. Journalists will casually but repeatedly ask people they meet if they saw their piece; they’ll follow every follow-up and reference every reference.

When, some years ago, the New York Times’ Martin Nisenholtz said that he would be thrown out of the newsroom if he suggested using real-time data, those same journalists were already obsessively checking the most-emailed list to see what was connecting with their audience. Whether it comes in a dashboard or not, every conversation, link and glowing tweet is data that helps build the editorial gut.

Yet here we are today in the midst of a continuing debate in newsrooms over who should be able to see audience metrics and what effect this data might have on news.

In the best newsrooms, data is a trusted feedback loop that informs and educates. It can tell when the audience is smarter than you thought; it can surprise and confound heuristics; and it can make journalists better.

However, when used poorly, metrics can lead to low morale in newsrooms and justify fears that the desire to create great or important content will be sacrificed in favor of cat videos and an endless stream of Miley Cyrus stories. The critics (and anyone who saw MTV’s Video Music Awards last year) are right to fear this.

To be clear, healthy and unhealthy uses of data are not binary states so much as a continuum on which almost all modern newsrooms sit. Two key factors indicate where on the continuum any particular organization will perch:

  1. Does the newsroom focus on metrics that align with good content and long-term audience growth, or on metrics that focus on link optimization and short-term revenue growth?
  2. Does the newsroom frame metrics as data intended to support and illuminate, or as a yardstick against which journalists will be paid?

Getting to the healthy end of the continuum starts with picking the right metrics, and then avoiding the temptation to pay journalists based on those metrics.

Picking the Right Metric is Harder Than It Looks

The dominant metric associated with traffic-chasing is the pageview. As News Corp.’s Raju Narisetti points out in an excellent Poynter piece, the pageview has the most direct relation to advertising revenue, so it is no wonder that executives attempting to align editorial and business goals find it a seductively simple metric.

However, the metrics that seem to make the most sense at first glance can have worrying effects.

In the 1990s, some U.S. hospitals decided to measure performance based upon the mortality rate at each location. What could be simpler than to judge a hospital on how many patients get better? However, it turned out that the easiest way to improve mortality rates was to stop admitting the sickest patients, stop taking on challenging cases and stop performing experimental treatments or surgeries. A simple, direct metric that should have been an excellent way to measure success turned out to promote exactly the opposite of what the hospital was there to do. Luckily, the plan was swiftly shelved.

In the same way, the pageview is a simple metric that directly aligns with short-term revenue but poses problems for newsrooms. It is problematic both because it is a simple metric unable to measure complex goals and because it measures the wrong thing.

It’s worth remembering that the pageview is not a measure of content at all. It is a measure of the link to that content. It increments before the content has even loaded on the page. What was written and whether the visitor likes it or loathes it, reads it or does not, has no impact on the pageview tally.

What we choose to measure defines what we aspire to. It is thus unsurprising that when a newsroom prioritizes a measure of link performance, that newsroom learns how to optimize links, but not content. Instead of metrics teaching the primacy of good content, we get clickbait and slideshows. This misalignment manifests as a schism in which it feels like one must choose between metrics or mission. No wonder some managing editors feel like they must keep such data under lock and key.

Just as with hospitals, newsrooms are mission-driven businesses that have more complex standards of success than just profit. That complexity requires a more thoughtful approach than simply Revenue=Impressions=Pageviews. Otherwise, like the oily salesperson who wrings every last dollar out of your visit but ensures he will never see you again, media companies risk destroying their ability to build a long-term business in exchange for a few additional dollars today.

As Narisetti points out, there are more nuanced metrics that arguably speak to long-term growth; he highlights pageviews-per-visit as well as the number of return visitors. However, even these metrics should be balanced against other indicators, lest they lead one astray.

For example, if pageviews-per-visit is viewed on its own, the extreme but logical takeaway might be that the best way to write a 300-word story is to put it across a 70-page slideshow. When balanced against average engaged time per page, it becomes a far more useful measure of audience engagement.
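To make the arithmetic concrete, here is a minimal sketch with invented numbers (this is an illustration, not any analytics vendor’s actual formula) of how a slideshow can win on pageviews-per-visit alone yet lose once engaged time is factored in:

```python
# Hypothetical numbers: a 300-word story published as a single page
# versus the same story split across a 70-page slideshow.

def engaged_seconds_per_visit(pages_per_visit, engaged_seconds_per_page):
    # Balance the two signals by combining them into total attention per visit.
    return pages_per_visit * engaged_seconds_per_page

article   = engaged_seconds_per_visit(1, 180)  # one page, read for 3 minutes
slideshow = engaged_seconds_per_visit(70, 2)   # 70 clicks, 2 seconds each

# On pageviews-per-visit alone, the slideshow "wins" 70 to 1;
# balanced against engaged time, the article wins, 180s to 140s.
print(article, slideshow)
```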

Making Sense of Complex Metrics

The challenge of using multiple metrics like this is that it sacrifices simplicity. It can be hard for an overworked and under-caffeinated editor to parse multiple conflicting metrics and understand what to do. What generally happens is that those in the back office get the luxury of analyzing and balancing multiple metrics, while those whose job it is to create content and build audience lapse into relying on an unhealthy but simple metric.

However, a number of news organizations have begun identifying innovative metrics that have the sophistication to align with complex goals yet are simple enough to use on the front lines. These metrics tend to have two qualities:

  1. They measure the performance of content, not the link to it.
  2. They tend to be combinatorial: one number influenced by two balanced behaviors, where both are needed to perform well.

Effective examples are those that balance the number of people and the amount of time they spend reading into a single metric, such as concurrent users or ESPN’s “average minute audience” (AMA). With these metrics, an article that holds readers for an average of two minutes and draws 2,000 pageviews can outperform one that draws 10,000 pageviews but has them bailing after 20 seconds.
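Using the figures above, a sketch of one such combinatorial metric, total engaged minutes, might look like this (a simplified illustration, not ESPN’s or Chartbeat’s actual methodology):

```python
def total_engaged_minutes(pageviews, avg_engaged_seconds):
    # One number driven by two balanced behaviors: audience size and attention.
    # An article must do well on both to score well.
    return pageviews * avg_engaged_seconds / 60

thoughtful = total_engaged_minutes(2_000, 120)  # 2,000 views, 2 minutes each
clickbait  = total_engaged_minutes(10_000, 20)  # 10,000 views, 20 seconds each

print(thoughtful, clickbait)  # 4000.0 vs roughly 3333.3 engaged minutes
```

Under this metric, the smaller but more attentive audience beats the larger, fleeting one.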

While the 10,000-pageview article may seem more valuable because it boosts the daily pageview tally, over the long term it’s not. If you can hold someone on your site for three minutes, they are twice as likely to return as if you hold them for one minute. That audience growth over time will have a far more significant effect on revenue than the pageview delta of that day.

Similarly, there are historically focused metrics that record the total amount of attention an article accrues over its life, such as Upworthy’s “attention minutes” or Chartbeat’s “engaged time” (both have identical methodologies). Like concurrents and AMA, these metrics more closely align with the mission of a newsroom, punish clickbait and give both thoughtful long-form pieces and popular short-form articles the opportunity to shine.

Metrics as Information, Not Incentives

However, even the best, most thoughtful metrics can still screw up a newsroom if they are applied as a cudgel for compliance instead of as a conduit for education. That often happens when metrics become the basis for incentive pay plans.

Some media operations see a wonderful logic in aligning revenue and editorial goals more deeply by linking journalists’ pay to the metrics they produce. The most recent example was the Oregonian’s leaked deck on paying for performance. As logical and well-intentioned as this seems, incentive plans often counteract the positive effects and purpose of metrics, particularly when the people being paid on a metric are the ones most able to game it.

In 2001, the Atlanta school district brought in a pay-for-performance program that rewarded teachers based upon their students’ test scores. Over the next few years the region saw a dramatic rise in student performance on those tests, culminating in Atlanta Schools Superintendent Beverly Hall being named National Superintendent of the Year in 2009. Four years later, Hall and 16 others were indicted on racketeering charges, and 80 educators pled guilty in a massive cheating scandal that had teachers holding “cheating parties” in basements as they rewrote exam papers.

While extreme, what happened in Atlanta fits a pattern where incentive plans undercut the goals they seek and encourage gaming the system. While they may increase the quantity of work produced, they do not have the same impact on quality. In fact, pay-for-performance systems have been found to diminish intrinsic motivation and squash creativity in the workplace. In part, that’s because people lose self-determination as they now feel compelled to do what they once wanted to do — and their need to always hit the quota means that they stick with ‘what works’ and stop taking risks and exploring.

Most journalists don’t need external motivation to create great content, and they don’t go into journalism for the money. In some cases, such as a massive contributor network like Forbes’s, there may be little alternative but to pay based upon metrics. However, in the long run, incentive plans can lead journalists to stop relying on metrics as a trusted feedback loop and start seeing them as a cruel judge they must satisfy.

A homepage editor of a U.S. metro daily recently told me that she did not care whether people read the content as long as they were clicking on enough pages. She didn’t seem very happy with her job. Not surprisingly, the site had major issues driving external traffic: its incentive plans had editors so focused on generating more clicks from a diminishing core audience that there was little worth sharing or linking to.

The lesson is clear: If you want newsrooms to embrace metrics, don’t set them to mining pageviews with pay models that assume they must be compelled to create. Give them the right metrics framed in the right way for the right goals, and trust in their internal desire to do a great job.

Some journalists wish this debate over metrics would stop because all that really matters, at least to them, is to write the important stories. Who needs data for that?

However, the job of a journalist is not to write the important stories. It is to communicate them. We don’t get to pat ourselves on the back because we wrote about Afghanistan if a poorly-worded headline or a buried lede means we did not get our message across. Nor can the writer drafting impenetrable, unseen prose on important issues exercise moral superiority over the hack peddling pablum with a golden headline. Both are failing in their job as journalists.

If we’re writing about something worth writing about, then we should want to know where our message has fallen short and then adapt. That’s what good metrics enable. After all, our goal is not simply to write, but to be read.