Unleash the Practitioners

Richard Dawkins is famously optimistic about human knowledge, especially within the confines of science. He is–understandably–allergic to the brand of postmodernist who believes that reality is simply a matter of interpretation, or cultural narrative. He has a much-repeated one-liner that comes off as quite devastating–“There are no postmodernists at 30,000 feet.”

It’s quite convincing. Engineers were able to make airplanes because of knowledge that was hard-won by the scientific community. The latter developed and tested theories, which the former could then put to use in order to get us moving about in the air at 30,000 feet. Right?

Wrong.

Historian Philip Scranton has done extensive work demonstrating that the original developers of the jet engine had no idea of the theory behind it, which was only developed after the fact. The jet engine was arrived at through tinkering and rote trial and error.

Dawkins was correct that there is a hard reality that is undeniable–one that led to many failed prototypes along the way. But the background story of science that he subscribes to is simply incorrect in this instance. Scientists didn’t develop theory which practitioners could apply; the practitioners invented something that scientists then felt the need to explain.

What’s amazing is how often this turns out to be the case, once you start digging.

Practitioners Elevated Us to New Heights

If there is one book that should be mandatory reading for every student of history, it is Deirdre McCloskey’s Bourgeois Dignity. It lays out in stark fashion just how little we know about what caused the enormous explosion in our standard of living that started over two hundred years ago. She systematically works through every attempted explanation and effectively eviscerates them. Issues of the day seem small when put in the perspective of a sixteen-fold growth in our standard of living (conservatively measured), and the utter inability of theorists to explain this phenomenon is humbling.

For our purposes here we focus on Chapter 38: “The Cause Was Not Science”.

We must be careful when throwing around words like science, since the word means many things to many people. What McCloskey is referring to is the stuff that generally gets grouped into the Scientific Revolution: the high theory traded by the Republic of Letters.

The jet engine example I mentioned earlier is exactly the sort of thing McCloskey has in mind. Take another example, from the book:

“Cheap steel,” for example, is not a scientific case in point. True, as Mokyr points out, it was only fully realized that steel is intermediate between cast and wrought iron in its carbon content early in the nineteenth century, since (after all) the very idea of an “element” such as carbon was ill-formed until then. Mokyr claims that without such scientific knowledge, “the advances in steelmaking are hard to imagine.” I think not. Tunzelmann notes that even in the late nineteenth century “breakthroughs such as that by Bessemer in steel were published in scientific journals but were largely the result of practical tinkering.” My own early work on the iron and steel industry came to the same conclusion. Such an apparently straightforward matter as the chemistry of the blast furnace was not entirely understood until well into the twentieth century, and yet the costs of iron and steel had fallen and fallen for a century and a half.

This story plays out over and over again–the hard work of material progress is done by practitioners, but everyone assumes that credit belongs to the theorists.

It turns out that it isn’t even safe to make assumptions about those industries where theory seems, from the outside, to really dominate practice. What could be more driven by economic and financial theory than options trading? Surely this must be a case more in line with traditional understandings of the relationship between theory and practice.

And yet Nassim Taleb and Espen Gaarden Haug have documented how options traders do not use the output of theorists at all, but instead have a set of practices developed over time through trial and error.

Back to McCloskey:

The economic heft of the late-nineteenth-century innovations that did not depend at all on science (such as cheap steel) was great: mass-produced concrete, for example, then reinforced concrete (combined with that cheap steel); air brakes on trains, making mile-long trains possible (though the science-dependent telegraph was useful to keep them from running into each other); the improvements in engines to pull the trains; the military organization to maintain schedules (again so that the trains would not run into each other: it was a capital-saving organizational innovation, making doubletracking unnecessary); elevators to make possible the tall reinforced concrete buildings (although again science-based electric motors were better than having a steam engine in every building; but the “science” in electric motors was hardly more than noting the connection in 1820 between electricity and magnetism–one didn’t require Maxwell’s equations to make a dynamo); better “tin” cans (more electricity); asset markets in which risk could be assumed and shed; faster rolling mills; the linotype machine; cheap paper; and on and on and on. Mokyr agrees: “It seems likely that in the past 150 years the majority of important inventions, from steel converters to cancer chemotherapy, from food canning to aspartame, have been used long before people understood why they worked…. The proportion of such inventions is declining, but it remains high today.”

In 1900 the parts of the economy that used science to improve products and processes–electrical and chemical engineering, chiefly, and even these sometimes using science pretty crudely–were quite small, reckoned in value of output or numbers of employees. And yet in the technologically feverish U.K. in the eight decades (plus a year) from 1820 to 1900, real income per head grew by a factor of 2.63, and in the next eight “scientific” decades only a little faster, by a factor of 2.88. The result was a rise from 1820 to 1980 of a factor of (2.63) · (2.88) = 7.57. That is to say–since 2.63 is quite close to 2.88–nearly half of the world-making change down to 1980 was achieved before 1900, in effect before science. This is not to deny science its economic heft after science: the per capita factor of growth in the U.K. during the merely twenty years 1980 to 1999 was fully 1.53, which would correspond to an eighty-year factor of an astounding 5.5. The results are similar for the United States, though as one might expect at a still more feverish pace: a factor of 3.25 in per capita real income from 1820 to 1900, 4.54 from 1900 to 1980, and about the same frenzy of invention and innovation and clever business plans as Britain after 1980.
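McCloskey’s compounding arithmetic is easy to check. Here is a quick sketch in Python–the figures are hers, from the passage above; the variable names are mine:

```python
import math

# Per capita real income growth factors for the U.K. (McCloskey's figures)
uk_1820_1900 = 2.63   # the eighty years (plus one) before 1900
uk_1900_1980 = 2.88   # the eight "scientific" decades that followed

combined = uk_1820_1900 * uk_1900_1980
print(round(combined, 2))              # 7.57, the 1820-1980 factor she reports

# "Nearly half" of the change came before 1900, measured in log-growth terms
share_before_1900 = math.log(uk_1820_1900) / math.log(combined)
print(f"{share_before_1900:.0%}")      # 48%

# Extrapolating the 1980-1999 factor of 1.53 out to an eighty-year span
print(round(1.53 ** (80 / 20), 1))     # 5.5
```

The numbers line up with the quote: multiplying the two eighty-year factors gives 7.57, and compounding the recent twenty-year factor four times gives the “astounding” 5.5.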

Note that McCloskey is not saying that science hasn’t made any contribution at all, or that the contribution is small. Taleb does not make that claim either. What is at issue here is that the contribution of science to our material well-being is not just overblown, but overblown by several orders of magnitude. McCloskey ultimately concludes that “We would be enormously richer now than in 1700 even without science.”

Yet They Are Everywhere in Chains

Alex Tabarrok thinks the road to an innovation renaissance runs through focusing education funding on STEM majors and tailoring our patent system so it only provides protection for industries like pharmaceuticals, where it appears to make the biggest positive difference. Even Michele Boldrin and David Levine, who otherwise believe in abolishing intellectual property entirely, agree with Tabarrok’s exception. And Tyler Cowen believes that part of what we need to do in order to climb out of the Great Stagnation is elevate the status of science and scientists.

With respect to these distinguished gentlemen, I disagree. The road to greater prosperity lies in breaking the shackles we have increasingly put around practitioners, and elevating their work, and their status.

Whether or not the specific skills implied by a STEM career contribute to progress, it is quite clear that what is taught in the classroom is unlikely to be what is practiced in the field–since the teaching is done by teachers, who are not as a general rule practitioners. And let us return to Scranton, McCloskey, and Taleb: the vast majority of our material wealth came from tinkering that is decidedly non-STEM.

If you want to make progress in pharmaceuticals, don’t do it by enforcing (or worse, expanding) patents, which inhibit trial and error by those who do not hold the patent. Instead, remove the enormous impediments we have put up to experimentation. The FDA approval process imposes gigantic costs on drug development, including the cost of delaying when a drug comes to market and greatly reducing the number of drugs that can be developed. There is an entire agency whose sole purpose is to regulate medical trials.

It is all futile–as I have said before, in the end, the general market becomes the guinea pig for many years after a drug is available, and no conceivable approval process can change that fact. But if you think differently–if you think theorists can identify which treatments are likely to succeed ahead of time, and are capable of designing experiments that will detect any serious side effects–then our current setup makes a lot of sense.

But that is not the reality. Nassim Taleb argued in his latest book that we should avoid treating people who are mostly healthy, because of the possibility of unknown complications. On the other hand, we should take way more risks with people who are dangerously ill than our current system allows.

The trend is going the other way. Because we have made developing drugs so expensive, it is much more profitable to try to come up with the next Advil–a drug that eases the symptoms of a mild disease but is purchased by a very wide market–than a cure for rarer but more deadly diseases. In one sense it doesn’t matter what drug companies aim for, because the ultimate use of a drug is discovered through practice, not through theory. But it does matter in the sense that we’re currently wasting many rounds of trial and error, putting people at risk in attempts to make small gains.

Thalidomide remains the iconic example of how this works. It was marketed as an anti-nausea drug but caused birth defects when pregnant women took it. Yet it is widely used today, for treating problems far more serious than nausea.

You Cannot Banish Risk

Aside from overestimating the abilities of theorists, the reason the discovery process of practitioners has been so hamstrung is because people are afraid of the errors inevitable in a process of trial and error. Thalidomide babies were an error, a horrible one. But there is no process, no theory that will allow us to avoid unforeseen mistakes. The only path to the drug that cures cancer or AIDS or malaria is one that involves people being hurt by unforeseen consequences. As Neal Stephenson put it, some people have to take a lot of risks in order to reduce the long run risk for all of us.

And along with the unforeseen harms, there are unforeseen gains as well. Penicillin, arguably the single greatest advancement in medicine in the 20th century, was an entirely serendipitous discovery.

I do not know if the stories of a great stagnation are accurate, but I agree with Peter Thiel that our regulatory hostility towards risk taking impoverishes us all, and allows many avoidable deaths every year.

The only way to start pushing the technological frontier again like we did at the peak of the Industrial Revolution is to empower the practitioners rather than impair them.

Unleash the practitioners and progress will follow.

Cultural Innovation — Putting Together the Pieces

My goal in 2012 is to write at least one paper and try to get it published. The paper I have in mind is inspired by three men, and their corresponding books. These are Friedrich Hayek and The Constitution of Liberty, Thomas Sowell and Knowledge and Decisions, and Everett Rogers and Diffusion of Innovations. I want to put the pieces together in order to make a single, solid argument, but I suspect I’m going to need a few more pieces before I can get there.

F. A. Hayek: Trial and Error and Local Knowledge

At any stage of this process there will always be many things we already know how to produce but which are still too expensive to provide for more than a few. And at an early stage they can be made only through an outlay of resources equal to many times the share of total income that, with an approximately equal distribution, would go to the few who could benefit from them. At first, a new good is commonly “the caprice of the chosen few before it becomes a public need and forms part of the necessities of life. For the luxuries of today are the necessities of tomorrow.” Furthermore, the new things will often become available to the greater part of the people only because for some time they have been the luxuries of the few.

-Friedrich Hayek, The Constitution of Liberty

Hayek argued that everything in human society–from technology to words to ideas to norms–begins its life as something developed and adopted by a small subset of the population. Some tiny fraction of these end up gaining mainstream adoption.

When I read The Constitution of Liberty two years ago, I became enamored of this very simple framework. It seemed an elegant explanation for how cultures evolve over time, through a process of rote trial and error.

On the other hand, I found it frustrating that Hayek didn’t elaborate on the process any further. If I had my way, I would throw out every last section of that book except the bits on cultural evolution, and have him fill the other 400-some pages by digging deeper into this concept.

What Hayek is known for more widely is his work on local knowledge. In particular, “The Use of Knowledge in Society” discusses how the price system makes it possible for people to act on their specific knowledge of time and place without needing to get the much more difficult to acquire big-picture knowledge. Speaking of a hypothetical man on the spot, he wrote:

There is hardly anything that happens anywhere in the world that might not have an effect on the decision he ought to make. But he need not know of these events as such, nor of all their effects. It does not matter for him why at the particular moment more screws of one size than of another are wanted, why paper bags are more readily available than canvas bags, or why skilled labor, or particular machine tools, have for the moment become more difficult to obtain. All that is significant for him is how much more or less difficult to procure they have become compared with other things with which he is also concerned, or how much more or less urgently wanted are the alternative things he produces or uses. It is always a question of the relative importance of the particular things with which he is concerned, and the causes which alter their relative importance are of no interest to him beyond the effect on those concrete things of his own environment.

Hayek’s entire worldview was built around the idea of complex human systems which required more knowledge than any one individual within them could possibly have, something that Leonard Read captured more poetically in “I, Pencil“. The process of cultural evolution involved individuals and small groups trying out something new, which is observed by others who decide whether or not that new thing fits in with the particulars of their own circumstances, needs, and taste. In short, it doesn’t require much knowledge to come up with something new, and then an incremental amount of local knowledge is brought to bear as more individuals get exposed to that new thing.

But, as I said, he didn’t develop this system in any real detail.

Thomas Sowell: Knowledge Systems

The unifying theme of Knowledge and Decisions is that the specific mechanics of decision-making processes and institutions determine what kinds of knowledge can be brought to bear and with what effectiveness. In a world where people are preoccupied with arguing about what decision should be made on a sweeping range of issues, this book argues that the most fundamental question is not what decision to make but who is to make it–through what processes and under what incentives and constraints, and with what feedback mechanisms to correct the decision if it proves to be wrong.

-Thomas Sowell, Knowledge and Decisions

Sowell begins Knowledge and Decisions by explicitly recognizing his intellectual debt to Hayek in general and “The Use of Knowledge in Society” in particular. Yet in the book he goes far beyond any level of detail that Hayek provided on the subject, at least that I am aware of.

One of the crucial components of the book is the emphasis on feedback mechanisms.

[F]eedback mechanisms are crucial in a world where no given individual or manageably-sized group is likely to have sufficient knowledge to be consistently right the first time in their decisions. These feedback mechanisms must convey not only information but also incentives to act on that information, whether these incentives are provided by prices, love, fear, moral codes, or other factors which cause people to act in the interest of other people.

Clearly, feedback mechanisms must play a huge role in Hayek’s process of social trial and error. Feedback mechanisms are what determine what is considered “error” and force people to change course. As Sowell explains, they take many forms:

A minimal amount of information–the whimpering of a baby, for example–may be very effective in setting off a parental search for a cause, perhaps involving medical experts before it is over. On the other hand, a lucidly articulated set of complaints may be ignored by a dictator, and even armed uprisings against his policies crushed without any modification of those policies. The social use of knowledge is not primarily an intellectual process, or a baby’s whimpers could not be more effective than a well-articulated political statement.

He added “[f]eedback which can be safely ignored by decision makers is not socially effective knowledge.”

So discerning what outcomes we should expect from the various forms of social trial and error requires identifying the relevant feedback mechanisms. The feedback that potential new words face takes a very different form from the feedback faced by a new product on the market, or by a publicly funded project.

The particulars of these feedback mechanisms, along with the incentives and institutional context, determine “what kinds of knowledge can be brought to bear and with what effectiveness” in each given case.

In many ways, Knowledge and Decisions is just good old-fashioned economics–it deals with incentives, with inherent trade-offs, and with scarcity. But it is a particularly Hayekian take on economics, with its focus on the scarcity of knowledge in particular and the role of very localized, difficult to communicate knowledge.

I don’t think Sowell gets nearly enough credit for this work among economists generally or even among Hayekians.

Everett Rogers: Curator of His Field

This book reflects a more critical stance than its original ancestor. During the past forty years or so, diffusion research has grown to be widely recognized, applied, and admired, but it has also been subjected to constructive and destructive criticism. This criticism is due in large part to the stereotyped and limited ways in which many diffusion scholars have defined the scope and method of their field of study. Once diffusion researchers formed an “invisible college” (defined as an informal network of researchers who form around an intellectual paradigm to study a common topic), they began to limit unnecessarily the ways in which they went about studying the diffusion of innovations. Such standardization of approaches constrains the intellectual progress of diffusion research.

Everett Rogers, Diffusion of Innovations, 5th Edition

After I read The Constitution of Liberty, I realized that there was probably a literature behind the kind of phenomena that Hayek was talking about. The term “early adopter”, which has become part of the mainstream lexicon, must have come from somewhere. Hayek was unfortunately of little help; he cited old theorists like Gabriel Tarde. While the diffusion literature owed a certain intellectual debt to Tarde, he was writing nearly half a century before the modern field emerged.

I eventually happened upon Everett Rogers’ Diffusion of Innovations, the various editions of which basically bookend the entire history of the field. This is quite helpful, because the field began in Rogers’ lifetime–and the first edition of the book was instrumental in its formation.

Where Hayek and Sowell’s works are within the confines of high theory, Diffusion of Innovations is a thoroughly empirical book, at times painstakingly so. There is not a single concept that Rogers introduces, no matter how simple, which he does not illustrate by summarizing a study or studies which involve an application of that concept.

Rogers helped formalize many of those concepts himself with the first edition of the book, published in 1962, when the literature was pretty sparse and dominated by rural sociologists. Since then, it has expanded across disciplines and in volume of published works. As a result, in the last edition of the book, published only a year before he died, there were many aspects of the diffusion process that had been solidly demonstrated by decades of work.

The book has always served as a tool for both introducing the field to those unfamiliar with it and attempting to steer future work. In the final edition, Rogers highlights not only what the literature has managed to illuminate, but also its shortcomings. In short, the book has just about everything you would want if you were attempting to get a sense of what work has been done and what has been neglected.

There are aspects of the diffusion literature which are quite Hayekian. In particular, the emphasis on uncertainty and discovery processes.

One kind of uncertainty is generated by an innovation, defined as an idea, practice, or object that is perceived as new by an individual or another unit of adoption. An innovation presents an individual or an organization with a new alternative or alternatives, as well as new means of solving problems. However, the probability that the new idea is superior to previous practice is not initially known with certainty by individual problem solvers. Thus, individuals are motivated to seek further information about the innovation in order to cope with the uncertainty that it creates.

The various mechanisms which Rogers describes which individuals employ to reduce uncertainty–trying the innovation on a partial basis, or observing how it goes for peers who have adopted the innovation, or measuring the innovation against existing norms, to name a few–can be seen as clear cut cases of economizing on information.

In many ways the diffusion model that Rogers lays out is the detailed system that I wanted Hayek to develop. Rogers discusses many specific aspects of the process, such as the role of heterogeneity and homogeneity, people who are more cosmopolitan or more localite, and the different categories of adopters–including the familiar early adopters–and on and on. He concisely describes and categorizes the various feedback mechanisms that bear on adoption within the system.

On the other hand, the beginning of the process–the actual generation of the innovation–is where the literature is by far the weakest. Rogers cites several who have criticized it for this, and agrees that it is a problem. He points out several attempts that have been made to address this problem, but it’s clear that not nearly as much work has been done nor are the results as solid.

Part of the problem is the historical origins of the field–the diffusion literature began with rural sociology, where innovations were developed in universities, which then peddled their wares to American farmers. The single most influential study dealt with the diffusion of hybrid corn, which seemed very clearly to be a quantifiable improvement over its alternatives. As a result, many diffusion studies start from the assumption that an innovation should diffuse–that there is some problem with the people who reject it rather than adopt it.

How did the pro-innovation bias become part of diffusion research? One reason is historical: hybrid corn was very profitable for each of the Iowa farmers in the Ryan and Gross (1943) study. Most other innovations that have been studied do not have this extremely high degree of relative advantage. Many individuals, for their own good, should not adopt many of the innovations that are diffused to them. Perhaps if the field of diffusion research had not begun with a highly profitable agricultural innovation in the 1940s, the pro-innovation bias would have been avoided or at least recognized and dealt with properly.

Moreover, the outline of what he believes is the process by which innovations are generated is a very directed, top-down process. It involves “change agents” that are consciously attempting to solve problems and diffuse some innovations. I’m not arguing against the existence of such agents–they are obviously an extensive part of society, from medical researchers seeking a cure for cancer and pharmaceutical companies attempting to get their drugs mainstream adoption, to Apple coming up with a completely different kind of smartphone and tablet and bringing them to market.

But the change agents, as Rogers and the diffusion literature envision them, are only a part of Hayek’s story of social trial and error. Consider language–new words and phrases emerge all the time and diffuse through a process which I am certain is identical to the one Rogers describes. On the other hand, I highly doubt that there are “change agents” who developed these new words and phrases in a lab somewhere and then promoted them. I think the process is far more organic.

Rogers also discusses the role of norms in terms of how they hinder or help the diffusion of an innovation, but left unsaid I think is that those norms are themselves undoubtedly the product of a previous diffusion. In Hayek and Sowell’s framework, traditions and existing norms emerged in response to trade-offs that needed to be made throughout a culture’s history. As Edmund Burke put it succinctly in Reflections on the Revolution in France:

We are afraid to put men to live and trade each on his own private stock of reason; because we suspect that this stock in each man is small, and that the individuals would do better to avail themselves of the general bank and capital of nations, and of ages.

The trial and error process that Hayek envisioned built up that “general bank and capital of nations, and of ages” as societies developed increasingly effective ways to manage their trade-offs.

Rogers does touch on this point of view from a couple of angles. First, he describes the work of Stephen Lansing in uncovering the astonishing effectiveness of the local knowledge contained in the religious hierarchy of Bali, as he described in his book Priests and Programmers. This was a case where the seemingly beneficial innovations of the Green Revolution proved inferior to what seemed like mere superstitious practice.

The Balinese ecological system is so complex because the Jero Gde must seek an optimum balance of various competing forces. If all subaks were planted at the same time, pests would be reduced; however, water supplies would be inadequate due to peaks in demand. On the other hand, if all subaks staggered their rice-planting schedule in a completely random manner, the water demand would be spread out. The water supply would be utilized efficiently, but the pests would flourish and wipe out the rice crop. So the Jero Gde must seek an optimal balance between pest control and water conservation, depending on the amount of rainfall flowing into the crater lake, the levels of the different pest populations in various subaks, and so forth.

When the Green Revolution innovations were introduced to the region, crop yields dropped, rather than increased. This intrigued Lansing.

In the late 1980s, Lansing, with the help of an ecological biologist, designed a computer simulation to calculate the effect on rice yields in each subak of (1) rainfall, (2) planting schedules, and (3) pest proliferation. He called his simulation model “The Goddess and the Computer.” Then he traveled with a Macintosh computer and the simulation model from his U.S. university campus to the Balinese high priest at the temple on the crater lake. The Jero Gde enthusiastically tried out various scenarios on the computer, concluding that the highest rice yields closely resembled the ecological strategies followed by the Balinese rice farmers for the past eight hundred years.
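As a crude illustration of the kind of trade-off Lansing’s model explored, consider a toy version in Python. Every number and function here is invented for illustration–it captures only the shape of the problem (pest losses fall with synchronization while water losses rise), not Lansing’s actual simulation:

```python
# Toy model of the Balinese planting trade-off: synchronized planting starves
# pests but spikes water demand; fully staggered planting spreads water use
# but lets pests flourish. All coefficients are invented for illustration.

def expected_yield(sync):
    """sync in [0, 1]: 0 = fully staggered schedules, 1 = everyone plants together."""
    pest_loss = (1 - sync) ** 2    # pests thrive when schedules are staggered
    water_loss = sync ** 2         # water demand peaks when schedules coincide
    return 1.0 - 0.5 * pest_loss - 0.5 * water_loss

# Search over degrees of synchronization for the best expected yield
best_yield, best_sync = max(
    (expected_yield(s / 100), s / 100) for s in range(101)
)
print(best_sync, best_yield)   # the optimum sits between the two extremes
```

Even in this cartoon version, the best schedule is neither extreme–which is the qualitative result the Jero Gde’s system embodied, and what the Green Revolution’s uniform prescriptions missed.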

Clearly, Balinese society had arrived at this optimal solution through some process. But Rogers does not delve too deeply into this.

Rogers also acknowledges that the literature may have focused too exclusively on more centralized processes.

In recent decades, the author gradually became aware of diffusion systems that did not operate at all like centralized diffusion systems. Instead of coming out of formal R&D systems, innovations often bubbled up from the operational levels of a system, with the inventing done by certain lead users. Then the new ideas spread horizontally via peer networks, with a high degree of re-invention occurring as the innovations are modified by users to fit their particular conditions. Such decentralized diffusion systems are usually not managed by technical experts. Instead, decision making in the diffusion system is widely shared, with adopters making many decisions. In many cases, adopters served as their own change agents in diffusing their innovations to others.

Though recognizing that such processes exist, it’s clear that the work that has been done on this is much thinner than the more traditional, change agent based research.

Questions That Remain

As I said, all three of these pieces have some holes in them, and those holes aren’t necessarily filled just by putting all of them together.

The next logical step would probably be to seek out more material like Rogers’, where a lot of work has been done and concrete conclusions can be drawn. Any work on how new words and phrases emerge and proliferate would probably be a good start.

Online communities also have many customs, such as hashtags on Twitter and the hat tip among bloggers. The advantage to customs like this is that they leave behind recorded evidence, unlike, say, an oral tradition. We know, for instance, when hashtags first became popularized among Twitter users–it is documented. A great deal of work is being done by communications scholars on subjects such as these; this could also probably provide some more solid leads.

What I want to argue is that innovations are generated in a Hayekian trial and error process, and some subset of them gain mass adoption in the manner described by the diffusion of innovations literature. I want to describe the role that local knowledge plays in that process; how the feedback mechanisms and incentives shape what innovations are generated and which ones ultimately are adopted.

But there’s more research to be done before I can make a case for this thesis that is solid enough for me to be comfortable with.

Innovation will Bubble Up from the Long Tail

I recently wrote that the long tail of digital content producers–that is, the vast majority–will make nearly nothing in revenue. This is especially true when compared to the head of the tail, the tiny fraction of content producers that will earn the vast majority of the revenue. By this I did not mean that the long tail was unimportant–in fact, I believe that the long tail is the most important segment, because that is where the future can be found.

Social Trial and Error

Societies progress through continual, parallel processes of trial and error. Small groups adopt products, activities, or norms; a subset of these are picked up by larger groups, and an even smaller subset is picked up by yet larger groups. This process continues until only a tiny fraction of the original products, activities, or norms go mainstream.

An enormous number of trials end up discarded before a single one makes it to even a middle level of adoption, much less the favored few that go mainstream–or stay there long.

This phenomenon, well documented in the diffusion of innovations literature, is most familiar to people in the fast-moving world of consumer technology, where phrases like “early adopters” are used in casual conversation. But it applies to anything that can proliferate across groups–art, for example, and content more generally.
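As a toy illustration (my own sketch, not anything from the diffusion literature itself), this funnel can be modeled as a simple simulation: assume each innovation must survive a low-probability adoption step at each successively larger group, with made-up pass rates at each stage.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def survives_funnel(adoption_probs):
    """True if an innovation is adopted at every level of the funnel."""
    return all(random.random() < p for p in adoption_probs)

# Hypothetical pass rates: small group -> larger group -> mainstream.
stages = [0.5, 0.2, 0.05]

trials = 100_000
mainstream = sum(survives_funnel(stages) for _ in range(trials))

# Only about 0.5% of trials survive all three stages; the rest are
# discarded along the way, most of them at the very first level.
print(f"{mainstream} of {trials} trials went mainstream")
```

Even with generous pass rates at each individual stage, compounding them leaves only a sliver of the original trials standing, which is the point of the funnel: the enormous number of failures is a structural feature, not an anomaly.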

A Hotbed of New Ideas and Failure

An unknown, aspiring writer in today’s world faces the same problem as any unknown, aspiring writer did in generations past–obscurity. He has many more tools at his disposal than his equivalents in the past did–he can start a blog or a podcast, and connect with others on social networks to promote his work. There are also many, many more places he can submit his work–there are still magazines in the traditional sense, but there are many more online outlets with widely varying audiences.

These tools are available to any aspiring writer, however–in fact, the barriers to putting out your writing in public are so low that huge swaths of people who wouldn’t have even tried in the past are also putting their stuff out there. If anything, the web and the new opportunities it affords have actually reduced the probability that any one aspiring author will make it big.

If an aspiring writer wants to set himself apart, he will have to innovate. Of course, as touched on above, most of these innovations will fail to gain any traction. The new and exciting things happening in writing, however, will come from the subset that succeeds.

Scott Sigler is an example of a successful innovator in digital writing. After losing his book deal ten years ago, he learned about podcasting and decided to record and serialize his book himself, and put it out for free. He continued to do this after the first book, and eventually had built up a big enough audience to catch the attention of Dragon Moon Press, a small independent publisher. On the strength of his online following–who helped not only with sales but with marketing the book–the book managed to rocket up Amazon’s bestseller list. This caught the attention of Random House, with whom he currently has a contract. His second book with them was a New York Times bestseller. He isn’t selling Harry Potter-level blockbusters, but he has definitely moved up out of the long tail and into the head.

Sigler wasn’t the sole creator of the podcast novel; others were trying it out at basically the same time. But Kevin Kelly has documented how innovations and ideas tend to occur in parallel in art, as well as in science, math, and technology.

The podcast novel is a great example of how this dynamic works, too, because while it helped launch the careers of Sigler and some of his peers, the form itself has yet to become anything like mainstream. It has grown over time, but Podiobooks.com, one of the largest repositories for podcast novels on the web, still boasts a mere 569 titles, and a registered audience of 83,000. If you asked a random individual, even a random book enthusiast, odds are extremely low that they would have even heard of podcast novels.

Most innovations never make it even this far–but there is still no guarantee that podcast novels will get to the level of mainstream adoption, or even mainstream awareness.

The Head of the Tail is Conservative

The big record labels, publishing houses, and movie studios will never try something truly new. I am confident on this point–anything celebrated as new by these big institutions will in fact just be the first time that big money has been spent on a form that was first tried out by individuals in the long tail.

It makes sense–a lone writer, musician, or filmmaker works with a very small budget. A writer may pay next to nothing out of pocket, facing an opportunity cost made up primarily of his time. If he tries something new and different and it fails, he may be out a few months of work. A big publishing house, on the other hand, has to pay the salaries of its army of editors, not to mention the costs of promoting a work. In dollar terms at least, failure hurts a publishing house far more than it hurts the lone, unknown author.

And publishing houses still fail more often than they succeed. They just win really big when they do win, and that subsidizes the failures. Profitability depends on increasing the fraction of the authors they sign who end up being successes, and minimizing the failures.

For that reason, they are always going to stick to the tried and true. Innovations will have to gain widespread adoption in the long tail–and for a while–before they bubble up to the head.

Consider the movement toward ebooks. The formats that Amazon, Barnes and Noble, and Apple are providing consumers are essentially nothing more than digitizations of the print versions. They do not offer the increased capabilities that digital technology makes possible–mixing video and audio in with the text, for example, something commonly done on blogs. They optimize for the tried and true, because all the money is being invested in the tried and true.

Mechanisms exist for making money from innovations–you could pitch an idea beforehand on Kickstarter, or make an app for smartphones and tablets and charge a price for it. Only after a respectable amount of money has been made by innovators will the institutions at the head of the tail start to take notice.

So while I don’t agree with Chris Anderson’s original hypothesis that the long tail will be of increasing monetary significance to businesses, I do think that it will be an even greater engine of innovation in the digital era than it was in the analog one.