Disclaimer: the following is simply a hypothesis, a story if you will. I submit it to you with no pretension of either originality or authority.
When content companies look at their incoming traffic, they divide it into two general categories–referral traffic and direct traffic.
They’re both fairly self-explanatory. Referral traffic comes from people who found you from somewhere else. The biggest source is generally search traffic, but links from Facebook, Twitter, or someone’s blog are also typical.
Direct traffic, as the name implies, is made up of people who go directly to the site. Often this means that they are loyal followers of your content.
A loyal user is much more valuable than someone who finds an article of yours from Google, gives it a look, and then never comes back again. They’re more valuable in the quantifiable sense that they see your ads a lot more than that drive-by search user, but they’re more valuable in less obvious ways as well.
The vast majority of the traffic to the big content sites is referral traffic. While the individual referral visitor may be less valuable than the individual direct visitor, the total amount of revenue from referral traffic is much larger. This is why the internet is rife with SEO spam and headlines engineered to get clicked when tweeted. Search traffic is a gigantic pie, and empires have been built on it alone, with practically zero direct traffic.
However, having a loyal user base that comes to your site regularly is extra valuable precisely because it is likely to get you more referral traffic. Consider: loyal users are more likely to share a link to content on your site from Twitter, Facebook, Tumblr, Reddit, or a good old-fashioned blog. This is exactly the kind of behavior that generates referral traffic–directly from people clicking those links, indirectly from those people sharing the links themselves, and yet more indirectly when links from their blogs improve your Google ranking.
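To see why this compounding matters, here is a minimal sketch of the cascade, assuming entirely made-up numbers: each share brings some direct visitors, and a small fraction of those visitors share in turn.

```python
# A toy model of the sharing cascade described above; every number here is
# hypothetical. Each share reaches some visitors directly, a fraction of
# those visitors share again, and so on, so the total is a geometric series.

def cascade_visitors(initial_shares: float,
                     visitors_per_share: float,
                     reshare_rate: float,
                     rounds: int = 10) -> float:
    """Estimate total visitors generated by an initial burst of shares."""
    total = 0.0
    shares = initial_shares
    for _ in range(rounds):
        visitors = shares * visitors_per_share
        total += visitors
        shares = visitors * reshare_rate  # some of those visitors share in turn
    return total

# 50 loyal readers share a post; each share brings ~20 visitors, and 2% of
# those visitors share it again.
print(f"{cascade_visitors(50, 20.0, 0.02):,.0f} visitors")
```

So long as each visitor produces less than one further visitor on average, the cascade converges to a finite multiple of the initial burst–and the size of that initial burst is set by the loyal base.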
Of course, even within your loyal base there is variation in how valuable a particular user is. The guy who has read the Washington Post for forty years and has just moved online is less valuable to the Post than someone like Mark Frauenfelder, who might link to one of their articles on Boing Boing, improving their Google ranking further and sending some of his traffic their way. But it’s still useful to think broadly about direct traffic vs. referral traffic.
The Porous Wall
Back in March, the New York Times launched something like a paywall. There are numerous particulars and exceptions, but the long and short of it is that someone like me, who only ever visits the Times when someone I know links to it, faces no wall of any sort. Meanwhile, someone who has loyally visited the Times every day for years will have to pay if they want to see more than 20 articles a month.
At the time, Seamus McCauley of Virtual Economics pointed out the perverse incentives this created: the Times were literally punishing loyalty without doing anything to lure in anyone else. Basic economic intuition dictates that this should mostly result in reducing the amount of direct traffic that they receive.
The Times did spend a reported $40 million researching this plan, and while I’ll never be the first person to claim business acumen on the part of the Times, you have to think they did something with all that money. As usual, Eli had a theory.
[Embedded tweet: blackbirdpie id 52741320770469891]
Imagine, for a moment, that all of the Times’ direct traffic was composed of individuals who were perfectly price inelastic in their consumption of articles on nytimes.com. That is, they would pay any price to be able to keep doing it. Assume, also, that all of the Times’ referral traffic was perfectly price elastic–if you raised the price by just one cent, they would abandon the site entirely. The most profitable path for the Times in this scenario would be, if possible, to charge direct traffic an infinite amount to view the site, while simultaneously charging the referral traffic nothing so they keep making the site money by viewing ads.
The reality is a less extreme dichotomy–though I wouldn’t be surprised if a significant fraction of the Times’ referral traffic would vanish if they tried to charge it even a penny. Still, the direct traffic, while undoubtedly less elastic than the referral traffic, is unlikely to be perfectly inelastic.
Getting a good idea of just how inelastic that demand is would be a very valuable piece of information for the Times to have, and I think Eli is right that this is exactly what they spent the $40 million on–that, and devising the right strategy for effectively price discriminating between the two groups.
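To make the price discrimination logic concrete, here is a minimal sketch, assuming invented audience sizes and a simple exponential stand-in for the demand curves; it is not the Times’ actual model or data.

```python
from math import exp

# A toy model of the two-audience pricing problem. Every number and the
# exponential demand curve are assumptions made for illustration only.

DIRECT_USERS = 100_000        # loyal daily readers (hypothetical)
REFERRAL_VISITS = 2_000_000   # drive-by visits per month (hypothetical)
AD_REVENUE_PER_VISIT = 0.05   # dollars (hypothetical)

def still_paying(users: float, price: float, elasticity: float) -> float:
    """Users still paying at a given monthly price, under a simple
    exponential stand-in for a demand curve."""
    return users * exp(-elasticity * price)

def porous_wall_revenue(price: float) -> float:
    """Charge only the inelastic direct audience; referral traffic stays
    free and keeps generating ad revenue."""
    return (still_paying(DIRECT_USERS, price, elasticity=0.05) * price
            + REFERRAL_VISITS * AD_REVENUE_PER_VISIT)

def hard_wall_revenue(price: float) -> float:
    """Charge everyone; the highly elastic referral audience mostly leaves,
    taking its ad revenue with it."""
    return (still_paying(DIRECT_USERS, price, elasticity=0.05)
            + still_paying(REFERRAL_VISITS, price, elasticity=2.0)) * price

for name, revenue in [("porous", porous_wall_revenue), ("hard", hard_wall_revenue)]:
    best = max(range(1, 51), key=revenue)
    print(f"{name} wall: best price ${best}, revenue ${revenue(best):,.0f}/month")
```

Under these assumptions, at the revenue-maximizing price the porous wall comes out ahead: charging the elastic referral audience anything substantial drives it away for almost no subscription revenue in return. Estimating the real elasticities is presumably where much of that research money went.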
It’s too soon to tell if the strategy will work for the Times, or if it’s a viable strategy for any content company to pursue.
Come One, Come All
2011 also saw the birth of a new technology website, The Verge.
The Verge began after a group of editors from Engadget left, in the spirit of the traitorous eight, because they believed they could do it better than AOL would allow them to. During their time at Engadget, they had developed a following through the listeners of their podcasts and their presence on social media–Twitter in particular. They also were fairly active in the comment sections of their own posts.
In order to bridge the gap between the launch of the new site and the departure from the old one, they set up a makeshift, interim blog called This Is My Next, and continued their weekly podcast. This kept them connected with the community they had built around them and allowed them to keep building it while they were also working on launching The Verge.
There are a lot of things that I really like about The Verge. First, they bundled forums into the site and gave people posting in them practically the same tools that the writers use for posts on the main site. The writers themselves participate in the forums, and when they find a user’s post particularly exceptional, they highlight it on the main site and on The Verge’s various social network presences.
Second, they do a lot of long-form, image- and video-rich niche pieces that may take time to get through but have a kind of polish that is rare among web-native publications.
When I told a friend of mine about how much I loved these pieces, he very astutely asked “but it costs like eleven times as much to make as a cookie-cutter blog post, and do you really think it generates eleven times more revenue?”
This question bothered me, because in a straightforward sense he seems to be right. Say that Paul Miller could have written eleven shorter posts instead of this enormous culture piece on StarCraft. There is no way that The Verge made eleven times as much in ad revenue from that post as they would from the eleven shorter posts he could have written.
But posts like that one attract a certain kind of audience. I may not read every long feature piece that The Verge does, but I like that they are there and I read many of them. The fact that they do those pieces is part of the reason that I made them my regular tech news read rather than Engadget or Gizmodo.
In short, the clear strategy being pursued by The Verge is to reward their most loyal audience, even if it doesn’t directly result in more revenue than trying to game search engines would. There isn’t always tension between the two strategies–one of the site’s features is called story streams, pages like this which make it easy to see how a particular story has unfolded so far. Each stream is also one more page that could potentially show up in Google search results.
Still, having followed the group very closely at Engadget first, and now at The Verge, it seems clear to me that the core mission is to build up the loyal visitors. If a feature piece costs eleven times as much as a shorter one, but makes a loyal reader out of someone who indirectly brings 100 visitors to the site over time by sharing links on Twitter and Facebook, was the feature piece profitable?
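One way to see how it could be is to put hypothetical numbers on the question. Everything below is invented for illustration, except the 100-visitors figure from the paragraph above.

```python
# A back-of-envelope version of the question: eleven cheap posts vs. one
# feature that costs the same total amount but converts some readers into
# loyal followers who bring referral visitors later. All numbers invented.

SHORT_POST_VISITS = 5_000        # drive-by visits per short post (hypothetical)
FEATURE_VISITS = 15_000          # visits to the feature itself (hypothetical)
REVENUE_PER_VISIT = 0.005        # dollars of ad revenue per visit (hypothetical)
NEW_LOYAL_READERS = 500          # readers the feature converts (hypothetical)
VISITS_PER_LOYAL_READER = 100    # the figure used in the text above

short_posts_revenue = 11 * SHORT_POST_VISITS * REVENUE_PER_VISIT
feature_revenue = (FEATURE_VISITS
                   + NEW_LOYAL_READERS * VISITS_PER_LOYAL_READER) * REVENUE_PER_VISIT

print(f"eleven short posts: ${short_posts_revenue:,.2f}")
print(f"one feature piece:  ${feature_revenue:,.2f}")
```

On these numbers the feature comes out ahead, but only because of the loyal readers it creates–and how many it actually creates is exactly the part no one can measure directly.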
The Verge is even younger than the Times’ paywall, so time has yet to tell if their approach is sustainable.
Farming Referral Traffic
At the onset of 2011, a big rumble was building in the tech community about Google’s search quality. Many claimed that it had become filled with spam. Google was not deaf to these criticisms, and argued that spam was actually at an all-time low. The problem wasn’t spam, they argued, but “thin content”–and what have come to be called content farms.
The logic of the content farm is that with enough volume, enough of your pages will make it high enough in search results to get you enough referral traffic to make a pretty penny. In short, a content farm focuses its efforts on acquiring referral traffic and forgoes any real effort to build up direct traffic.
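That logic is just an expected-value calculation. The numbers below are invented; the point is only that small per-page odds, multiplied over enormous volume, can still pay.

```python
# The content farm arithmetic, with hypothetical numbers: even if any given
# page has only a small chance of ranking well, enough pages make the
# expected referral revenue add up.

PAGES = 500_000
P_RANKS_WELL = 0.05              # chance a page lands high in search results
MONTHLY_VISITS_IF_RANKED = 500   # search visits to a page that ranks
REVENUE_PER_VISIT = 0.005        # dollars of ad revenue per visit
COST_PER_PAGE = 2.00             # one-time cost of a cheap article

monthly_revenue = PAGES * P_RANKS_WELL * MONTHLY_VISITS_IF_RANKED * REVENUE_PER_VISIT
total_cost = PAGES * COST_PER_PAGE

print(f"expected monthly revenue: ${monthly_revenue:,.0f}")
print(f"one-time content cost:    ${total_cost:,.0f}")
print(f"months to break even:     {total_cost / monthly_revenue:.1f}")
```

Nothing in this arithmetic depends on anyone ever coming back, which is exactly the point.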
A referral-only strategy isn’t a problem in itself, of course. If you have 500,000 of the greatest articles ever written by mankind, then you, Google, and search users are all better off if you rank highly in relevant search results. And many companies that took a beating when Google began targeting thin and low-quality content for downranking have cried that they were unfairly lumped in with the rest just for having a lot of pages.
The content farm controversy aside, there is a clear and obvious place for a site whose audience is made up almost entirely of referral traffic–reference sites. Wikipedia, for instance, at one point received somewhere in the neighborhood of 90 percent of its visits from referral traffic. Though it is probably the biggest recipient of search traffic, it is not unusual in this regard–there is a whole industry of people whose livelihoods are made or broken by the various tweaks of Google’s algorithm.
The Path Forward
As I said, much remains uncertain. Five or ten years from now we’ll look back and be able to say which experiments proved to be the models for striking the balance between these audiences. For my part, I can only hope that it looks more like what The Verge is doing than what the New York Times is trying.