RSS is a Tool for Living in the Long Tail

Over a year ago, I talked about my information diet, and Google Reader was a central part of it. Today, Google announced that Reader is being shut down. I thought this would be a good opportunity to discuss some of the big changes I’ve made to how I use RSS, and to my information diet more generally.

These days I don’t read any big sites at all in my RSS reader; instead I use it to keep up with lower-volume, interesting long tail content.

Drowning at the Head of the Tail

This time last year, something like 300 “items”—blog posts or comics—were going through my Google Reader account each day. And that was after I had already cut back a fair amount in response to Clay Johnson’s book, The Information Diet.

This is exactly why RSS reading has never really taken off the way that a lot of other sexy web 2.0 things from the same era did—the people who tried it would subscribe to their favorite websites, see “100+ items unread”, get stressed out about it, and never return.

I always just thought those people were wimps. I was perfectly happy living in a perpetual stream of content; I could glance over most of it and pick out the pieces that looked interesting. I was aggressive about marking whole folders as read if I didn’t feel like dealing with them. It worked for me.

After reading Johnson’s book, however, I started to rethink the matter. What was I really gaining by staying on top of everything posted by sites like The Verge, Gizmodo, and Boing Boing? Any big story blows up on social media anyway. These sites put out dozens of posts a day, most of which I didn’t even remember glancing over. I could just as easily check them once a day, or every so often, and there would be plenty to choose from. Meanwhile, they crowded out the low-volume feeds that constitute the overwhelming majority of my subscriptions.

So I eventually unsubscribed from all the professional, high-volume sites.

What surprised me was how much of a relief it was. After defending this approach and insisting I could handle it, giving it up felt absolutely awesome. Some days I don’t even bother to check The Verge. It’s been a pretty big validation of Clay Johnson’s argument.

I don’t use Google Reader anymore—I’ve been using NewsBlur for a few months now—so I don’t know exactly how many items pass through my account on a daily basis. But it is an order of magnitude fewer than it was a year ago.

RSS Readers Were Made for the Long Tail

Imagine all the time lost going to sites when they haven’t updated. This was the original argument for RSS readers, one that I’ve made throughout my usage of them. With RSS readers, that time is not wasted. You only interact with a site when it has updated.

It’s precisely that argument that demonstrates why it’s so pointless to subscribe to a site like The Verge. You know that The Verge will be updated every day. Even during the slow days of the weekend they’ll update two or three times at least. During peak gadget release season, in the middle of the week, they will sometimes post over 70 times in a single day. If you want to read new posts from The Verge, you can go there. There will be new ones.

Right now I would say that the upper end of posting for the feeds I follow is 10 posts a day, if that. The greatest value comes from the feeds that post once a day, or less than once a day, or even less than once a month. There are feeds I’ve subscribed to that have gone dark for years and then suddenly come back to life. It costs me nothing to continue subscribing, and when they do come back it’s a wonderful surprise.

I currently follow dozens of webcomics. The post rate for these varies from 5 times a week, to 3 times a week, to once a week, to once a month, to whenever the hell the artist feels like updating. There is no way I could keep up with the number of webcomics I do without RSS. It would become unmanageable.

Despite the fact that Google has abandoned the playing field, I think that RSS readers are a phenomenal way to explore the gems that exist out there on the open web. The stuff that is more personal; the stuff that people are doing because they love doing it.

The head of the tail is basically inescapable; why not work a little harder at mining the long tail?

Stories About Education

My piece this week at The Umlaut was inspired by the ongoing debate about online education. I say “inspired by” because, while it was my intention to write about online education at the outset, that’s not where I ended up at all. I came to feel that the whole debate wasn’t really about Udacity or any of the new sexy education tech of the moment, but rather about a general sentiment that something has gone horribly amiss in the American system of higher education.

Moreover, it became clear to me that there isn’t anything particularly special about the latest online offerings. Cheap, practical alternatives to the college path have existed for a long time now in the form of professional development courses, industry certifications, and vocational schools. For some reason, people tend to look down their noses at these options, if they even acknowledge them as options at all. I decided to make our weird priorities, and the consequences of them, the main thrust of my piece.

The different stories we tell about why we go through this crazy 16-year process called formal education have always fascinated me. One thing I noticed is that proponents of the “online education is going to change everything” point of view all tended to subscribe to the notion that education is about information transfer. Their critics, on the other hand, were much more ambiguous about what they thought education was for—and seemed to lean towards some sort of cultural, rite-of-passage argument.

Meanwhile, in economics, you have the signaling theory of education. The short version is that the content of your education is more or less worthless; what matters is the signal your credential sends to the market about what kind of worker you are. One of the biggest proponents of this point of view is Bryan Caplan, who is quite skeptical of online education’s ability to make a dent in the establishment. Unlike most of online education’s critics, he argues from a place of cynicism rather than idealism about the nature of education in general.

Information Transmission

The productivity of teaching, measured in, say, kilobytes transmitted from teacher to student per unit of time, hasn’t increased much. As a result, the opportunity cost of teaching has increased, an example of what’s known as Baumol’s cost disease. Teaching has remained economic only because the value of each kilobyte transmitted has increased due to discoveries in (some) other fields. Online education, however, dramatically increases the productivity of teaching.
-Alex Tabarrok, Why Online Education Works

The whole point of learning is that you learn something, right? It’s all about imparting information to the student. Whether we’re talking about multiplication tables or the date and consequences of the Battle of Hastings, students are—in theory—supposed to walk away from the school year with more information in their brains than they had at the beginning of it.

If this is your story of education, then brick-and-mortar education must surely be doomed. In the essay linked to above, Tabarrok points out three reasons why this would be so:

I see three principle advantages to online education, 1) leverage, especially of the best teachers; 2) time savings; 3) individualized teaching and new technologies.

The first point goes to the fact that a single recorded lecture or piece of writing can now be viewed by anyone anywhere in the world who has access to the Internet. Tabarrok’s TED talk has been watched 700,000 times, several hundred thousand more views than his non-recorded, un-uploaded lectures will ever get. This is the blockbuster effect. In theory, the very best lectures by the very best teachers can now dominate the education of everyone in the world.

The time savings come from the fact that a recorded lecture can be as concise as possible, since people who don’t get it the first time have the luxury of rewatching it as many times as they want. Meanwhile, the people who get it the first time can move right on to the next lecture, a convenience not afforded to students in a classroom, who have to wait while the teacher answers their classmates’ questions.

The individualized teaching comes from the fact that teachers can outsource the lecture part of education to online resources and spend the time they would have spent lecturing answering individual questions and talking one-on-one with students instead. This is what is called flipping the classroom.

Clay Shirky also subscribes to the education-as-information-transmission story. In the post that kicked off a huge debate about online education and education in general, he compares Udacity and MOOCs to Napster and the MP3: infinite copies can be made, they can be transmitted over the Internet, and they’re available at no charge. In a response to critics of the piece, he bluntly states what he believes to be the chief purpose of education:

What we do is run institutions whose only rationale—whose only excuse for existing—is to make people smarter.

I am highly skeptical of the information transmission story of education. I’m sure that some information does get transmitted, though, as Caplan points out, most students forget most of it, and the forgetting doesn’t take very long. Moreover, as I outline in my Umlaut article, more cost-effective methods for transmitting information to students have existed for decades, and they have only multiplied in quantity and variety while falling in cost.

Yet still we treat the 16-year path from K-12 to a bachelor’s degree as the proper way of doing business. Does it really take 16 years to convey all the information we want conveyed to our youth, even without digital technology? I find this story hard to swallow. Something else must be going on here.

Manufacturing Persons of Quality

The classroom has rich value in itself. It’s a safe, almost sacred space where students can try on ideas for size in real time, gently criticize others, challenge authority, and drive conversations in new directions.

-Siva Vaidhyanathan, A New Era of Unfounded Hyperbole

My suspicion is that this whole formal education thing is just a case of cultural snobbery. K-12 makes a certain sense—there’s certainly a lot of value in promoting literacy and basic math skills. I don’t think there’s any reason why that should take until we’re 18, but there you go.

But college in particular was never about information transmission, back before the modern push to universalize attendance to it. College was where Persons of Quality went to learn how to sound intelligent when talking with other Persons of Quality.

We talk about college as if it’s the only thing standing between the average student and a lifetime of unemployment—or worse, a lifetime as a cashier or burger flipper at McDonald’s. But I think on some deeper level, people just think there is something wrong with the kind of people who don’t go to college. Or that college imbues its students with something glorious and unquantifiable that it is unjust to deny anyone access to.

But if you don’t want to work at McDonald’s, you could become, say, an electrician. According to the BLS, this requires 144 hours of technical training and then four years of paid apprenticeship, after which the median electrician makes $48,250 a year–enough to live comfortably. And this is just one example–there are tons of paths that cost enormously less in both money and time and still avoid the burger-flipping or gas-station-clerk outcome, if avoiding that sort of work is your goal.

But if it’s not a lawyer or a doctor, we sneer at vocational education.

In the week leading up to submitting my piece at The Umlaut, I read a lot of responses to Tabarrok’s and Shirky’s arguments. One thing I found odd was that these critics seemed to have a less clear idea of what education is for than Shirky or Tabarrok did. But I detected cultural snobbery in the background. Take the Siva Vaidhyanathan quote above. Or the following:

As a student, when I was at Ohio State I took a class with Jennifer Cognard-Black, a graduate student. I had been reading George Orwell’s letters. I just went to her office hours and I was like, I’ve got these letters, aren’t they cool? And I had nothing to say! I was really just thrashing around, [it was] incoherent excitement. And she said, “So, what are you interested in, which part of it?” I don’t even remember what we said. It wasn’t that this was an intellectually transformative experience; it was that I was taken seriously as a thinker, and it validated the entire idea of being excited about George Orwell’s letters. It sounds like a small thing, but it wasn’t; it was huge.

That’s Aaron Bady, quoted in the Awl. Unlike most of the participants in this debate, Bady seems refreshingly clear that we don’t really know what this is all for:

The thing is, when you frame this as, “what does this give them for the rest of their lives?” one never really knows, and I think that’s the point; there is something, but it’s something we’re all discovering together. When we reduce education to job training; when we reduce it to, “we need X skills, so let’s do whatever causes X skill to come out,” you really close down all the possibilities.

So college is a place where you can be taken seriously as a thinker, but we don’t really know what value that will have for the rest of your life. And if you home in on one particular thing, you’re being closed-minded about all the other possibilities.

As someone who spends a lot of time being excited by any number of nerd-equivalents to George Orwell, I feel confident saying that I’ve been able to live Bady’s experience over and over for something like half of my life. I did it online. When I was a teenager, I went from forum to forum, raging about politics and philosophy to anyone who would engage. And engage they did. I found plenty of people to share my excitement over esoteric intellectual subjects with over the years. After forums, it was blogs, which are obviously still a big part of it. Then Facebook and Twitter and the new wave of social tools grew up and it became that much easier to connect with others who would share my excitement.

So finding a group where you can be “taken seriously as a thinker” is easier than it has ever been. And I’m not sure that keeping an open mind about what college might be for is worth billions of dollars in subsidies and encouraging people to take on hundreds of thousands of dollars in student loans.

It would be unfair not to link to Bady’s own critique of Shirky here, which is much more targeted to Shirky’s specific arguments.

But from Bady, Vaidhyanathan, the author of the Awl piece, and elsewhere, I’ve sensed an implicit cultural judgment in the same family as complaints that we’re reading tweets rather than Tolstoy. I always wonder–why Tolstoy? A lot of people are reading Harry Potter, for instance. Are they somehow spiritually inferior if they haven’t also read Tolstoy, or some great classic?

I don’t mean to imply that there is no value in Tolstoy or in the great classics. I do mean to imply that obtaining that sort of value probably isn’t worth the enormous amount of money currently being spent on it by governments, charities, and private individuals. Especially when you can read Tolstoy for free online!

Signaling Theory

According to the signaling model, employers reward educational success because of what it shows (“signals”) about the student. Good students tend to be smart, hard-working, and conformist – three crucial traits for almost any job. When a student excels in school, then, employers correctly infer that he’s likely to be a good worker. What precisely did he study? What did he learn how to do? Mere details. As long as you were a good student, employers surmise that you’ll quickly learn what you need to know on the job.

-Bryan Caplan, The Magic of Education

Signaling theory in economics was pioneered by Michael Spence. The basic idea is that some people have the qualities employers desire and some don’t, but on the surface the two groups look identical. However, obtaining a college degree costs less for the people with the desired qualities than for the people without them. Maybe this is because those people tend to come from middle-class families, and therefore have their families’ financial support. Or maybe it’s because the people without the desirable qualities don’t have the discipline to make it through four years of coursework.

Whatever the reason, the cost differential is all that matters. The students could learn nothing but garbage for four years, but if they can get to the diploma at a lower cost than people without the qualities that are valued in the market, they will increase their lifetime earnings by getting the diploma.
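To make the logic concrete, here is the standard textbook form of Spence’s separating condition (the notation is mine; the structure is the canonical model, not anything specific to Caplan or to this essay). Let $w_D$ and $w_N$ be the wages paid to workers with and without the diploma, and let $c_H < c_L$ be the cost of finishing the degree for high- and low-quality workers. The diploma functions as a signal exactly when

$$c_H \le w_D - w_N < c_L,$$

so that high-quality workers find the degree worth its cost while low-quality workers do not, even if nothing taught along the way has any market value.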

Note that education policy understandably aims to lower the cost of access for everyone. If education is largely signaling, this is extremely wasteful: since the cost differential is what matters, lowering costs for everyone just raises the bar for obtaining the differential. In practice this means spending more years in college than people would have under a less generous policy. So if signaling theory explains most of why people go to college, our current policy is wasteful both in what it spends and in how much longer it encourages people to waste their time.

Caplan brings a lot of empirical arguments to bear in defense of the signaling theory of education. Most of these are intended to demonstrate how worthless an education would actually be in the market if all we cared about was its content. Consider the following:

Yes, I can train graduate students to become professors. No magic there; I’m teaching them the one job I know. But what about my thousands of students who won’t become economics professors? I can’t teach what I don’t know, and I don’t know how to do the jobs they’re going to have. Few professors do.

Many educators sooth their consciences by insisting that “I teach my students how to think, not what to think.” But this platitude goes against a hundred years of educational psychology. Education is very narrow; students learn the material you specifically teach them… if you’re lucky.

Other educators claim they’re teaching good work habits. But especially at the college level, this doesn’t pass the laugh test. How many jobs tolerate a 50% attendance rate – or let you skate by with twelve hours of work a week? School probably builds character relative to playing videogames. But it’s hard to see how school could build character relative to a full-time job in the Real World.

Caplan makes strong, provocative arguments, and I look forward to his book on the subject. I tend to think that at least part of education must be explained by the signaling model. On the ground, this was certainly a story my fellow students would often pay lip service to. It was not as systematic or formal as the actual economic theory of signaling; instead it took the form of the belief that all we really got out of college was a piece of paper that for some reason bestowed magical qualities upon us in the job market. Whether anyone really believed that depended on the mood you caught them in, but it was a well-circulated story nonetheless.

I also wonder if there isn’t some marriage of the signaling story and the Person of Quality story to be found. What if what employers really want are people raised with a certain set of values, and going to college demonstrates a commitment to those values?

In the diffusion of innovations literature, new ideas and products spread lightning fast when they reach that big chunk of the population (labeled the “early majority” and “late majority”) where the vast majority of the people involved have very similar characteristics. This sets them apart from “innovators” and “early adopters” who tend to be richer or of higher status on some margin than the majority, and “late adopters”, who tend to be poorer and of lower status than the majority.

What if the chief benefit of universalizing formal Western education in this country was that it made everyone a lot more like one another? Just as we’re more likely to marry or befriend people who are like us, we may also be more likely to hire someone who is like us, or invest in a company run by someone who is like us, and so on. Maybe education has almost nothing to do with information transmission, and is instead some mixture of acculturation and signaling.

How Education Has Changed and Will Continue To

The bottom line is that we don’t really know what function education serves. There are a lot of stories, and you can put the evidence together in various ways to defend many of them, including many that contradict one another.

But regardless of which story you choose to believe, the way education will change, and has been changing, seems clear to me.

It will change in the way that all things have changed since the onset of the Industrial Revolution–we will see bigger blockbusters and longer tails.

Let’s say you believe the information transmission story. Then, as Tabarrok pointed out, you will get blockbuster lectures and educational materials; stuff that is seen by an unprecedented number of people around the world who are eager to learn. You will also get long tail effects–a huge amount of variety, some of which only gets seen by perhaps a handful of people but which may nevertheless enrich them intellectually.

Let’s say you’re a believer that the world has been going to hell in a handbasket ever since we all stopped reading Tolstoy. Well, as I mentioned before, now anyone anywhere in the world with an Internet connection can access Tolstoy’s works, for free. And anyone anywhere in the world can write about Tolstoy, and Shakespeare, and how society is going to hell in a handbasket since there are people who would rather read Harry Potter. There will be a long tail of communities populated by people who subscribe to the culture of the Person of Quality.

Caplan is extremely skeptical that online education will have much of an impact if the signaling theory is correct. But there has been a long tail of credentialing for a long time–consider project management certification, or SAS certification, or any number of other industry-specific certifications. And Russ Roberts pointed out that homeschooling went from being a marginal activity to gaining wide acceptance.

Moreover, there’s an argument to be made that our current way of paying for higher education is simply fiscally unsustainable–Shirky makes this case at length. So the nature of the average education may end up changing due to some combination of financial implosion in the traditional sector and innovation on the outside.

Education is already a power law industry, and it will always remain one. It will probably grow even more skewed than it is today. But the particulars are going to change, and the long tail will get longer. On the whole, I am optimistic.

PostScript

After posting these, I received a couple of responses that tell a story of a different sort.

Along the same lines, my father added:

I think Shirky’s right: higher education is like the daily newspaper, a bundle of unrelated stuff. It all makes cultural sense, until it doesn’t. College was a place for the Great Middle Class to park their kids until they figured life out. The cost-benefit of that makes the commitment increasingly untenable…

Rereading The Long Tail

This was officially launch week for The Umlaut, a new online magazine that my friends Jerry Brito and Eli Dourado have started. There are five of us who will be regular writers for it. For my first piece, I thought it might be fun to go back and re-examine The Long Tail almost seven years after it was published.

The Long Tail had a big impact on the conversation around new media at the time, and it was personally significant for me. The original article was published in October of 2004, a mere month before I began blogging. Trends in new media fascinated me from the beginning, so I kept up with Chris Anderson’s now-defunct Long Tail blog religiously.

Nineteen years old and a tad overenthusiastic, I strongly believed that the mainstream media was going the way of the dinosaur and would be replaced by some distributed ecosystem of mostly amateur bloggers. In short, I thought the long tail was going to overthrow the head of the tail, and that would be that. Moreover, I thought that all content would eventually be offered entirely free of charge.

That was a long time ago now, and my views have evolved in some respects, and completely changed in others. I think that the head of the tail is going to become larger, not smaller, and professionals are here to stay–as I elaborate on here. However, I do think that the growth of the long tail will be very culturally significant.

When I began rereading The Long Tail, I expected to find a clear argument from Anderson that he thought the head of the tail would get smaller relative to the long tail. Instead, he was frustratingly vague on this point. Consider the following quote:

What’s truly amazing about the Long Tail is the sheer size of it. Again, if you combine enough of the non-hits, you’ve actually established a market that rivals the hits. Take books: The average Barnes & Noble superstore carries around 100,000 titles. Yet more than a quarter of Amazon’s book sales come from outside its top 100,000 titles. Consider the implication: If the Amazon statistics are any guide, the market for books that are not even sold in the average bookstore is already a third the size of the existing market—and what’s more, it’s growing quickly. If these growth trends continue, the potential book market may actually be half again as big as it appears to be, if only we can get over the economics of scarcity.

Let us unpack this quote a little.

First, Anderson offers the fact that more than 25% of Amazon’s book sales occur outside its top 100,000 titles as evidence of the revenue potential of the long tail. But this is conceptually flawed. At the time of the book’s publication, Amazon sold some 5 million titles. If nearly all of the revenue beyond the top 100,000 titles was captured by the following 100,000 titles, then 4% of Amazon’s titles accounted for nearly all of its book revenue. And there is good reason to believe that that is exactly how the distribution played out, both then and now.

The fact that 200,000 is a larger number than 100,000 is indeed significant; it shows the gains a company can make from increasing its scale, if it can bring costs down enough to do so. But to claim that this is evidence of the commercial potential of the long tail is flat out wrong. We’re still talking about a highly skewed power law distribution–in fact, an even more skewed one. We used to speak of 20% of books accounting for 80% of the revenue; here we are talking about 4% of the books accounting for something on the order of 99% of the revenue.
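Here is the back-of-the-envelope version of that accounting (the catalog and sales figures come from the discussion above; the step where the next 100,000 titles soak up nearly all of the remaining sales is an assumption about the distribution’s shape, not a published Amazon number):

```python
# Back-of-the-envelope check of the Amazon example. Figures are from the
# text; the "next 100k captures nearly all the rest" step is an assumption.
total_titles = 5_000_000   # titles Amazon sold around the book's publication
top = 100_000              # roughly what a B&N superstore carries
top_share = 0.75           # top 100k titles' share of sales (1 - 25%)

print(f"Top {top:,} titles: {top / total_titles:.0%} of the catalog, "
      f"{top_share:.0%} of sales")
print(f"Top {2 * top:,} titles: {2 * top / total_titles:.0%} of the catalog, "
      f"nearly all of sales")
```

Two percent of the catalog carries three quarters of the sales, and doubling that slice to 4% sweeps in nearly everything else. That is a story about skew, not about the tail.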

This argument appears several times throughout the book, in several forms. At one point he talks about how the scaling up of choices makes the top 100 inherently less significant. Which is true, but it does not make the head of the tail any less significant; it just means that a greater number of works fit within that head.

Second, this bit about “if only we can get over the economics of scarcity.” Anderson argues, repeatedly, that mass markets and big blockbusters are an artifact of a society built on scarcity, and the long tail is a creation of the new economics of abundance. This is wrong to its core.

As I argue in my first piece at The Umlaut, we have been expanding the long tail while increasing the head of the tail since the very beginning of the Industrial Revolution. Scale in the upward direction fuels scale in the outward direction. Consider Kevin Kelly’s theory of 1,000 true fans, the paradigm of long tail success.

Assume conservatively that your True Fans will each spend one day’s wages per year in support of what you do. That “one-day-wage” is an average, because of course your truest fans will spend a lot more than that.  Let’s peg that per diem each True Fan spends at $100 per year. If you have 1,000 fans that sums up to $100,000 per year, which minus some modest expenses, is a living for most folks.

Now ask yourself: how do we get to a world where someone can make a living by having 1,000 true fans, or fewer? Or 1,000 more modest fans, or fewer?

One way we get to that world is through falling costs. If we assume a fixed amount that some group of fans is willing to pay for your stuff, then progress is achieved by lowering the cost of producing your stuff.

Another way is for everyone to get wealthier, and thus be able to be more effective patrons of niche creators. If I make twice as much this year as I did last year, then I can afford to spend a lot more above and beyond my costs of living.

Another conceivable way is sort of a combination of the first two–falling costs for the patrons. If I make as much in nominal terms as I did last year, but my costs of living fall by half, then it is effectively the same as though I had doubled my income.
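In real terms, this is just a ratio; a one-line formalization of that last claim:

$$\text{real income} = \frac{\text{nominal income}}{\text{cost of living}}, \qquad \frac{Y}{P/2} = 2\,\frac{Y}{P}.$$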

Put all three of these trends together and you have perfectly described the state of material progress since the onset of the Industrial Revolution. Huge breakthroughs in our productive capacities have translated into a greater ability to patronize niche phenomena.

Obviously the personal computer and the Internet have taken this trend and increased its scale by several orders of magnitude–especially in any specific area that can be digitized. But that doesn’t mean we’ve entered a new era of abundance. The economics are the same as they have always been. The frontier has just been pushed way, way further out.

Moreover, the blockbuster is not an artifact of scarcity. Quite the opposite. The wealthier and more interconnected we are, the taller the “short tail” can be. In my article, I mention the example of Harry Potter, which was a global hit on an unprecedented scale (this Atlantic piece estimates the franchise as a whole has generated something like $21 billion). Hits on that scale are rare, giving us the illusion at any given moment that they are a passing thing, a relic of a bygone era of mass markets. But the next Harry Potter will be much, much bigger than Harry Potter was, because the global market has only grown larger and more connected.

Consider Clay Shirky’s observation that skew is created when one person’s behavior increases the probability that someone else will engage in that behavior “by even a fractional amount”. His example involves the probability that a given blog will get a new reader, but it extends to just about every area of human life. And the effect he describes, but does not name, is the network effect–one additional user of Facebook increases the probability that it will gain yet another, and one additional purchaser of a Harry Potter book increases the probability that yet another person will buy it.

And we know, from the diffusion of innovations literature, that there comes a certain point at which one additional person increases the probability by a lot more than a fractional amount. As Everett Rogers put it:

The part of the diffusion curve from about 10 percent adoption to 20 percent adoption is the heart of the diffusion process. After that point, it is often impossible to stop the further diffusion of a new idea, even if one wished to do so.

Now, if network effects are what create skew in the first place, and we are living in the most networked age in history, how plausible does Anderson’s argument seem that the head of the tail will be of decreasing significance because of new networks?

What Does He Really Think?

Part of what’s frustrating about the book is that Anderson never makes a solid claim about how big he thinks the head of the tail will be relative to the tail. He offers some facts that have no real bearing on the question, such as the Amazon statistic described above. In some places he seems to be saying the head will be smaller:

The theory of the Long Tail can be boiled down to this: Our culture and economy are increasingly shifting away from a focus on a relatively small number of hits (mainstream products and markets) at the head of the demand curve, and moving toward a huge number of niches in the tail. In an era without the constraints of physical shelf space and other bottlenecks of distribution, narrowly targeted goods and services can be as economically attractive as mainstream fare.

The long tail is going to be “as economically attractive” as the head of the tail. That’s what he’s saying, right? If so, then he is wrong, for the reasons described above.

But maybe that isn’t what he’s saying. Consider:

This is why I’ve described the Long Tail as the death of the 80/20 Rule, even though it’s actually nothing of the sort. The real 80/20 Rule is just the acknowledgment that a Pareto distribution is at work, and some things will sell a lot better than others, which is as true in Long Tail markets as it is in traditional markets. What the Long Tail offers, however, is the encouragement to not be dominated by the Rule. Even if 20 percent of the products account for 80 percent of the revenue, that’s no reason not to carry the other 80 percent of the products. In Long Tail markets, where the carrying costs of inventory are low, the incentive is there to carry everything, regardless of the volume of its sales. Who knows—with good search and recommendations, a bottom 80 percent product could turn into a top 20 percent product.

Here he seems to be saying that the 80/20 Rule will always hold, but that shouldn’t stop us from realizing how important the long tail is in our lives, and how much more important it will become as we get ever more diversity of choice in the relatively niche. Moreover, companies should continue to extend their long tail offerings because, at any moment, one of them might suddenly jump to the head of the tail. A Kindle book that sells only a handful of copies a year may suddenly go viral and make Amazon a ton of money.

If that’s what he believes, then he is correct. But the mixture of bad accounting of the sort in the top-100,000-books example, statements such as the one quoted above about what “the theory of the Long Tail can be boiled down to”, and this last quote about the 80/20 Rule forces me to conclude that Anderson’s thinking is simply muddled on this particular point.

Credit Where Credit is Due

Finally, if there’s one thing we can all agree with Anderson on, it is that the expansion of the long tail has greatly increased the quality of our lives. Whether it’s someone like Scott Sigler, who has managed to make a living from his fans, or the passionate community of a small subreddit, there is an ever-expanding virtual ocean of choices in the long tail today.

Chris Anderson argued that the fact that something is not a hit of the blockbuster variety does not mean it is a miss. There are some things that are much more valuable to a small group of people than they are to everyone else, thereby precluding their ability to become a blockbuster. There are also some things that might be equally appealing to the same number of people as a blockbuster, but they simply were not lucky enough to be among the few that won that particular lottery.

All of us live in both the head of the tail and the long tail, and I’m glad that Anderson convinced so many of the value of the latter.

Unleash the Practitioners

Richard Dawkins is famously optimistic about human knowledge, especially within the confines of science. He is–understandably–allergic to the brand of postmodernist who believes that reality is simply a matter of interpretation, or cultural narrative. He has a much-repeated one-liner that comes off as quite devastating: “There are no postmodernists at 30,000 feet.”

It’s quite convincing. Engineers were able to make airplanes because of knowledge that was hard-won by the scientific community. The latter developed and tested theories, which the former could then put to use in order to get us moving about in the air at 30,000 feet. Right?

Wrong.

Historian Philip Scranton has done extensive work demonstrating that the original developers of the jet engine had no idea of the theory behind it; the theory was only worked out after the fact. The jet engine was arrived at through tinkering and sheer trial and error.

Dawkins is correct that there is a hard, undeniable reality, one that claimed many failed prototypes. But the background story of science he subscribes to is simply incorrect in this instance. Scientists didn’t develop theory that practitioners could apply; the practitioners invented something that scientists then felt the need to explain.

What’s amazing is how often this turns out to be the case, once you start digging.

Practitioners Elevated Us to New Heights

If there is one book that should be mandatory reading for every student of history, it is Deirdre McCloskey’s Bourgeois Dignity. It lays out in stark fashion just how little we know about what caused the enormous explosion in our standard of living that started over two hundred years ago. She systematically works through every attempted explanation and effectively eviscerates them. Issues of the day seem small when put in the perspective of a sixteen-fold growth in our standard of living (conservatively measured), and the utter inability of theorists to explain this phenomenon is humbling.

For our purposes here we focus on Chapter 38: “The Cause Was Not Science”.

We must be careful when throwing around words like science, as it means many things to many people. What McCloskey is referring to is the stuff that generally gets grouped into the Scientific Revolution; the high theory traded by the Republic of Letters.

The jet engine example I mentioned earlier is exactly the sort of thing McCloskey has in mind. Take another example, from the book:

“Cheap steel,” for example, is not a scientific case in point. True, as Mokyr points out, it was only fully realized that steel is intermediate between cast and wrought iron in its carbon content early in the nineteenth century, since (after all) the very idea of an “element” such as carbon was ill-formed until then. Mokyr claims that without such scientific knowledge, “the advances in steelmaking are hard to imagine.” I think not. Tunzelmann notes that even in the late nineteenth century “breakthroughs such as that by Bessemer in steel were published in scientific journals but were largely the result of practical tinkering.” My own early work on the iron and steel industry came to the same conclusion. Such an apparently straightforward matter as the chemistry of the blast furnace was not entirely understood until well into the twentieth century, and yet the costs of iron and steel had fallen and fallen for a century and a half.

This story plays out over and over again–the hard work of material progress is done by practitioners, but everyone assumes that the credit belongs to the theorists.

It turns out that it isn’t even safe to make assumptions about those industries where theory seems, from the outside, to really dominate practice. What could be more driven by economic and financial theory than options trading? Surely this must be a case more in line with traditional understandings of the relationship between theory and practice.

And yet Nassim Taleb and Espen Gaarden Haug have documented how options traders do not use the output of theorists at all, but instead have a set of practices developed over time through trial and error.

Back to McCloskey:

The economic heft of the late-nineteenth-century innovations that did not depend at all on science (such as cheap steel) was great: mass-produced concrete, for example, then reinforced concrete (combined with that cheap steel); air brakes on trains, making mile-long trains possible (though the science-dependent telegraph was useful to keep them from running into each other); the improvements in engines to pull the trains; the military organization to maintain schedules (again so that the trains would not run into each other: it was a capital-saving organizational innovation, making doubletracking unnecessary); elevators to make possible the tall reinforced concrete buildings (although again science-based electric motors were better than having a steam engine in every building; but the “science” in electric motors was hardly more than noting the connection in 1820 between electricity and magnetism—one didn’t require Maxwell’s equations to make a dynamo); better “tin” cans (more electricity); asset markets in which risk could be assumed and shed; faster rolling mills; the linotype machine; cheap paper; and on and on and on. Mokyr agrees: “It seems likely that in the past 150 years the majority of important inventions, from steel converters to cancer chemotherapy, from food canning to aspartame, have been used long before people understood why they worked…. The proportion of such inventions is declining, but it remains high today.”

In 1900 the parts of the economy that used science to improve products and processes—electrical and chemical engineering, chiefly, and even these sometimes using science pretty crudely—were quite small, reckoned in value of output or numbers of employees. And yet in the technologically feverish U.K. in the eight decades (plus a year) from 1820 to 1900, real income per head grew by a factor of 2.63, and in the next eight “scientific” decades only a little faster, by a factor of 2.88. The result was a rise from 1820 to 1980 of a factor of (2.63) × (2.88) = 7.57. That is to say—since 2.63 is quite close to 2.88—nearly half of the world-making change down to 1980 was achieved before 1900, in effect before science. This is not to deny science its economic heft after science: the per capita factor of growth in the U.K. during the merely twenty years 1980 to 1999 was fully 1.53, which would correspond to an eighty-year factor of an astounding 5.5. The results are similar for the United States, though as one might expect at a still more feverish pace: a factor of 3.25 in per capita real income from 1820 to 1900, 4.54 from 1900 to 1980, and about the same frenzy of invention and innovation and clever business plans as Britain after 1980.
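It is worth making the arithmetic in that passage explicit. Chaining the two eighty-year growth factors gives the 1820–1980 figure, and extrapolating the 1980–1999 pace to eighty years means compounding the twenty-year factor four times:

$$2.63 \times 2.88 = 7.57, \qquad 1.53^{80/20} = 1.53^{4} \approx 5.5$$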

Note that McCloskey is not saying that science has made no contribution at all, or that the contribution is small. Taleb does not make that claim either. What is at issue is that the contribution of science to our material well-being is not just overblown, but overblown by several orders of magnitude. McCloskey ultimately concludes that “We would be enormously richer now than in 1700 even without science.”

Yet They Are Everywhere in Chains

Alex Tabarrok thinks the road to an innovation renaissance runs through focusing education subsidies on STEM majors and tailoring our patent system so it only provides protection in industries like pharmaceuticals, where patents appear to make the biggest positive difference. Even Michele Boldrin and David Levine, who otherwise believe in abolishing intellectual property entirely, agree with Tabarrok’s exception. And Tyler Cowen believes that part of what we need to do to climb out of the Great Stagnation is elevate the status of science and scientists.

With respect to these distinguished gentlemen, I disagree. The road to greater prosperity lies in breaking the shackles we have increasingly put around practitioners, and elevating their work, and their status.

Whether or not the specific skills implied by a STEM career contribute to progress, it is quite clear that what is taught in the classroom is unlikely to be what is practiced in the field–since the teaching is done by teachers, who are not, as a general rule, practitioners. And to return to Scranton, McCloskey, and Taleb: the vast majority of our material wealth came from tinkering that was decidedly non-STEM.

If you want to make progress in pharmaceuticals, don’t do it by enforcing (or worse, expanding) patents, which inhibit trial and error by those who do not hold the patent. Instead, remove the enormous impediments we have put up to experimentation. The FDA approval process imposes gigantic costs on drug development, including the cost of delaying when a drug comes to market and greatly reducing the number of drugs that can be developed. There is an entire agency whose sole purpose is to regulate medical trials.

It is all futile–as I have said before, the general patient population ends up being the guinea pigs for many years after a drug becomes available, and no conceivable approval process can change that fact. But if you think differently–if you think theorists can identify ahead of time which treatments are likely to succeed, and are capable of designing experiments that will detect any serious side effects–then our current setup makes a lot of sense.

But that is not the reality. Nassim Taleb argued in his latest book that we should avoid treating people who are mostly healthy, because of the possibility of unknown complications. On the other hand, we should take way more risks with people who are dangerously ill than our current system allows.

The trend is going the other way. Because we have made developing drugs so expensive, it is much more profitable to try to come up with the next Advil, which will ease the symptoms of a mild condition but be purchased by a very wide market, than a cure for rarer but more deadly diseases. In one sense it doesn’t matter what drug companies try to do, because the ultimate use of a drug is discovered through practice, not through theory. But in another sense it does matter: we are currently wasting many rounds of trial and error, and putting people at risk, in the attempt to make small gains.

Thalidomide remains the iconic example of how this works. It was marketed as an anti-nausea drug but caused birth defects when pregnant women took it. Yet it is widely used today, for treating far more serious problems than nausea.

You Cannot Banish Risk

Aside from overestimating the abilities of theorists, the reason the discovery process of practitioners has been so hamstrung is that people are afraid of the errors inevitable in a process of trial and error. Thalidomide babies were an error, a horrible one. But there is no process, no theory, that will allow us to avoid unforeseen mistakes. The only path to the drug that cures cancer or AIDS or malaria is one on which some people are hurt by unforeseen consequences. As Neal Stephenson put it, some people have to take a lot of risks in order to reduce the long-run risk for all of us.

And along with the unforeseen harms, there are unforeseen gains as well. Penicillin, arguably the single greatest advancement in medicine in the 20th century, was an entirely serendipitous discovery.

I do not know if the stories of a great stagnation are accurate, but I agree with Peter Thiel that our regulatory hostility towards risk taking impoverishes us all, and allows many avoidable deaths every year.

The only way to start pushing the technological frontier again like we did at the peak of the Industrial Revolution is to empower the practitioners rather than impair them.

Unleash the practitioners and progress will follow.

When to Medicate

“We should not take risks with near-healthy people; but we should take a lot, a lot more risks with those deemed in danger.”

-Nassim Taleb, Antifragile

In an uncertain world, Taleb wants us to stop thinking we know the probabilities and instead think more seriously about payoffs.

Let’s say a new pill comes to market that claims to cure the common cold, quickly and with minimal side effects. What is the potential payoff from taking this pill? At best, you will end your cold more quickly than you otherwise would have. And at worst?

You may be tempted to say that the downside risk is not very large, since the pill had to go through a testing process run by the company that developed it and examined by the FDA. That process can take years–surely any problems would have been detected by its completion, right?

Uncertainty and Complexity

Wrong–any test is always going to have limits, by necessity. It might involve only one, two, or three thousand test subjects–whose selection is not truly random. Even if we could treat the statistical results with complete confidence, any effect that only occurs in a tiny fraction of this sample would impact a large number of people once it hits a market of millions. And any effect that doesn’t really visibly show up until a time period longer than the approval process will be missed entirely.
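To put rough numbers on this (purely illustrative figures, not drawn from any particular trial), consider a side effect that strikes one patient in ten thousand:

```python
# Illustrative arithmetic: how easily a rare side effect slips through
# a small trial. These numbers are made up for the example.
p = 1e-4             # side effect rate: 1 in 10,000 patients
n_trial = 3_000      # trial size
market = 10_000_000  # patients once the drug is on the market

# Probability the trial observes zero cases of the side effect
p_missed = (1 - p) ** n_trial
print(f"Chance the trial sees no cases at all: {p_missed:.0%}")  # ~74%

# Expected number of affected patients in the full market
print(f"Expected cases once on the market: {p * market:,.0f}")   # 1,000
```

More often than not the trial comes back clean, and a thousand people get hurt anyway.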

The bottom line is that the general patient population ends up being guinea pigs sooner or later, and there is no avoiding it. It’s for this reason that Robin Hanson always advises his students to avoid the “cutting edge” medical treatments in favor of those that have been tested by time. Treatments that have been around for 50 or 100 years are much less likely to have undetected risks than treatments that are 20, 10, or 5 years old–or worst of all, brand new.

Every new treatment has a large, unknown downside risk of undetected side-effects. Moreover, every new treatment has a similarly large, unknown downside risk of interaction with other treatments already on the market. Even if the testing process turns out to have revealed every possible side-effect, it is literally impossible for it to have detected every possible interaction–consider that some interactions will end up being with treatments that didn’t exist at the time of testing!

What Is There to Gain?

Taleb’s point isn’t sophistry. Consider the most famous case of undetected harm in the 20th century–Thalidomide. I had known that after Thalidomide made it to market, it caused a rash of birth defects. What I hadn’t realized was that it was being used to treat morning sickness.

So in the best-case scenario, the women taking Thalidomide would have had their nausea pass more quickly and been otherwise unchanged. But the worst-case scenario was clearly unknown, as history proved. The question you have to ask yourself when you receive some treatment today is whether what you’re being treated for is worth the risk of unwittingly stumbling upon the next Thalidomide.

If it’s something that our body is capable of dealing with on its own, Taleb’s advice is to forego treatment entirely. When the potential payoff is so small, errors on the part of the medical establishment will only hurt us.

This doesn’t mean that we should become anti-medicine. Instead, we should focus on extreme cases, and be willing to take more risks in those cases than our current regulatory and cultural environment allows. Taleb:

And there is a simple statistical reason that explains why we have not been able to find drugs that make us feel unconditionally better when we are well (or unconditionally stronger, etc.): nature would have been likely to find this magic pill by itself. But consider that illness is rare, and the more ill the person the less likely nature would have found the solution by itself, in an accelerating way. A condition that is, say, three units of deviation away from the norm is more than three hundred times rarer than normal; an illness that is five units of deviation from the norm is more than a million times rarer!
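Taleb doesn’t say which distribution he has in mind, but a standard Gaussian, with “rarer” read as two-sided tail probability, reproduces his orders of magnitude. A quick sketch (the distributional assumption is mine):

```python
# Sketch: rarity of 3- and 5-sigma conditions under a standard normal.
# The choice of distribution and of two-sided tails is an assumption;
# Taleb doesn't pin either down.
from scipy.stats import norm

for sigmas in (3, 5):
    tail = 2 * norm.sf(sigmas)  # P(|Z| > sigmas)
    print(f"{sigmas} sigma: about 1 in {1 / tail:,.0f}")
# 3 sigma: about 1 in 370        ("more than three hundred times rarer")
# 5 sigma: about 1 in 1,744,278  ("more than a million times rarer")
```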

If we focus on the cases that natural selection was unlikely to have addressed on its way to bringing us where we are today, we minimize the downside risk from unforeseen side effects that we expose ourselves to, and we maximize the potential gains of treatment.

Thus, the answer is not to increase regulation of the pharmaceutical industry or expand the FDA approval process. The latter is already so long that lives are lost while life-saving drugs crawl to market.

The Impulse to Intervene

The answer isn’t to just take what your doctor tells you at face value, either.

If 9 times out of 10, or 9.99 times out of 10, your doctor should tell you not to be treated at all, that is unfortunately not what you’re likely to hear when you arrive for your appointment.

Doctors are simply more likely to want to do something rather than nothing. Consider the following, again from Taleb:

Consider this need to “do something” through an illustrative example. In the 1930s, 389 children were presented to New York City doctors; 174 of them were recommended tonsillectomies. The remaining 215 children were again presented to doctors, and 99 were said to need the surgery. When the remaining 116 children were presented to yet a third set of doctors, 52 were recommended the surgery. Note that there is morbidity in 2 to 4 percent of the cases (today, not then, as the risks of surgery were very bad at the time) and that a death occurs in about every 15,000 such operations and you get an idea about the break-even point between medical gains and detriment.

In other words, doctors recommended surgery for a similar proportion of whatever group was presented to them–despite the fact that other doctors had already lumped those children into the group that didn’t need treatment!
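Run Taleb’s figures and the pattern jumps out: each panel of doctors recommended surgery for roughly the same 45 percent of whichever children were put in front of it.

```python
# Share of children recommended for tonsillectomy at each round,
# using the figures Taleb reports.
rounds = [(174, 389), (99, 215), (52, 116)]

for i, (recommended, presented) in enumerate(rounds, start=1):
    print(f"Round {i}: {recommended}/{presented} = "
          f"{recommended / presented:.1%}")
# Round 1: 174/389 = 44.7%
# Round 2: 99/215 = 46.0%
# Round 3: 52/116 = 44.8%
```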

Moreover, this problem is not confined to doctors in the 1930s. Consider how doctors and hospitals have responded to the scientific consensus that mammograms do not save lives on net.

For years now, doctors like myself have known that screening mammography doesn’t save lives, or else saves so few that the harms far outweigh the benefits. Neither I nor my colleagues have a crystal ball, and we are not smarter than others who have looked at this issue. We simply read the results of the many mammography trials that have been conducted over the years. But the trial results were unpopular and did not fit with a broadly accepted ideology—early detection—which has, ironically, failed (ovarian, prostate cancer) as often as it has succeeded (cervical cancer, perhaps colon cancer).

More bluntly, the trial results threatened a mammogram economy, a marketplace sustained by invasive therapies to vanquish microscopic clumps of questionable threat, and by an endless parade of procedures and pictures to investigate the falsely positive results that more than half of women endure. And inexplicably, since the publication of these trial results challenging the value of screening mammograms, hundreds of millions of public dollars have been dedicated to ensuring mammogram access, and the test has become a war cry for cancer advocacy. Why? Because experience deludes: radiologists diagnose, surgeons cut, pathologists examine, oncologists treat, and women survive.

In short, it is uncertain how deadly the cancers that mammograms detect early really are, but it is certain that the invasive tactics used to combat them put the patient at risk. The study that the article above opens with describes how the rise in mammograms has not resulted in a drop in the late-stage, definitely dangerous form of breast cancer.

There are any number of possible stories you can tell about why doctors will opt to do something rather than nothing, even when every intervention–needless or needed–carries the risk of iatrogenesis.

A Robin Hanson-style story (PDF) would go as follows: doctors are simply meeting a market demand. People are not really looking for what is medically best for them when they make an appointment, any more than consumers of news are trying to become more informed. What patients want is comfort–the comfort of someone who knows what they’re doing taking charge of decisions about their health. And few people take comfort in being told to do nothing–even when it’s the wisest choice. So the market produces doctors who satisfy the demand for comfort, rather than the demand for the best possible health outcomes.

The story subscribed to by Taleb and by the doctor quoted above is even more straightforward: more money is spent on intervention than on non-intervention, so the incentives are clear. I’m not so sure about this one, as the doctors making the diagnosis aren’t usually the ones who get paid for the procedure.

But the story doesn’t matter. The phenomenon of intervening too often is well documented, whatever the reason it occurs.

If what you’re interested in is your health, rather than comforting answers from a credentialed expert, then Taleb’s argument is worth considering. Do you really need to receive treatment for a bug that you’ll work through eventually, or for baldness, or for nausea that was always going to be temporary?

Why risk losing everything when you have so little to gain?

Another way to view it: the iatrogenics is in the patient, not in the treatment. If the patient is close to death, all speculative treatments should be encouraged—no holds barred. Conversely, if the patient is near healthy, then Mother Nature should be the doctor.