My First Year With the Kindle

I am not a music person. Don't get me wrong, I enjoy listening to music. But my taste in music has never been an important part of my identity, and listening has never been something I devoted much time to. For that reason, the iTunes revolution didn't impact my life as dramatically as it did the lives of some people I know, for whom music is a crucial part of who they are.

What does matter to me is books. I read a lot of books, and always have. However, the digital revolution in books lagged way behind the one in music. We can't know the reason for this, but there's one story that intuitively makes sense to me. By the time the web was born, CDs were already the primary way we were getting our music, and CDs were a digital format. It was trivially easy to rip songs from those CDs to our computers, which made piracy just as trivially easy once people started going online in large numbers. This created pressure to create legitimate, low-cost alternatives to Napster. In the publishing industry, however, we were still working with essentially the same “analog” product that humanity has known since Gutenberg: a physical, printed book. It takes a big time commitment to scan books page by page to turn them into something digital.

Amazon had built its empire on book sales, and despite the fact that they had started selling just about everything else under the sun, they weren't about to rest on their laurels. Jeff Bezos knew the digital disruption would eventually come to books, and he wanted to own it rather than have Apple or someone else come in and dominate the future of a category that had been Amazon's bread and butter. Learning the lesson of the iPod, he would offer a device with a tightly integrated content ecosystem. In 2007 he announced the Kindle, which was just such a device.

The Kindle basically created the market for ebooks, and has dominated that market as a result. Barnes and Noble's Nook is a distant second, and Apple's iBooks store has less than half of Barnes and Noble's market share (source).

The Kindle and Me

Despite my love of reading, I waited for years before I took the plunge. In 2009, after Amazon deleted people's copies of Orwell's 1984, I thought I might never trust them enough to buy into their ecosystem. However, given the PR firestorm that rained down on them afterward, I'm confident they won't pull something like that again–as the big fish in the ebook pond they are being scrutinized very closely, so they're unlikely to be able to accomplish it by stealth, either.

2009 was also the first year when I was really tempted, as I moved out of my parents' place and into an apartment, which meant moving my books. I left the majority of them behind, but the ones I took still amounted to a ton of boxes. Knowing that this was not going to be my last move, I started wondering whether physical books were worth the hassle.

I wasn’t really pushed over the edge until last year. I can pinpoint a single event that did it–the publication of Tyler Cowen’s The Great Stagnation. It wasn’t just that a brilliant economist at the school I got my MA from was publishing a purely digital book. It was also that just about everyone in the economics blogosphere was talking about it. It was a $4 digital book that kicked off a fascinating debate and, really, set many of the parameters of the discussion around our current economic predicament. After watching this unfold, I couldn’t help myself–I really, really wanted a Kindle.

I asked for one for my birthday, which was three months later. Now, almost a year later, I have 62 items on my Kindle. I have read a ton, to put it mildly.

To put it plainly, I love my Kindle, and I love the Amazon ecosystem. The device itself is much lighter to hold than a book is. You don’t have to worry about holding it open or turning pages, so you can hold it with one hand. The fact that it isn’t a fully-featured computing device is definitely a plus in terms of avoiding distractions. I can also read my books on just about any computer–from my iPhone to my laptop. If I forget my Kindle at home, I can still continue whatever book I was reading by logging into my Amazon account and reading it in the browser.

2011 turned out to be a year in which a lot of Great Stagnation-style books came out: short, cheap, and straight to digital. The other one that drew a similar amount of attention was Erik Brynjolfsson and Andrew McAfee's Race Against the Machine, but there were also Ryan Avent's The Gated City and Alex Tabarrok's Launching The Innovation Renaissance. I read blogs by all of these people, and when they announced their books, it was just too easy to follow the Amazon link and click “Buy now with 1-Click”. As my friend James Long eloquently put it, “any interesting Kindle book which costs less than $5 feels free to me because of a floating point error in my internal processor.”

Similarly, when bloggers or people on Facebook or Twitter that I trust recommend a book that is less than $2, it is hard to resist snapping it up. Authors and publishers are clearly learning to take advantage of this–I recently read Child of Fire, a fantasy novel that goes for $0.99. The price of the next book in the series jumps up to $5.99, and the third one goes up to $7.99.

To legacy publishers, ebooks may seem like a mixed bag, since they threaten their margins. To authors who haven't been able to make it big in the old system, however, there are new opportunities. Take Tim Pratt, whose Marla Mason series was cancelled before it was finished due to lack of sales. So he serialized the book on the open web and sold the complete version directly through the Kindle store. He was among the first to call my attention to the growth that indie authors with a strong fan following were seeing for their Kindle titles when he tweeted the following:

[blackbirdpie id=”28825354055″]

I’ve talked about how Scott Sigler has taken advantage of the ebook scene–I actually just finished reading Nocturnal on my Kindle before I started writing this post (though that one came out through his publisher). And Amanda Hocking captured everyone’s imaginations last year when it became clear that she was making some serious money off of her Kindle sales.

One thing I don't like about Kindle books, compared to print books or just open, non-proprietary digital standards, is that my friends can't borrow my books. With a print book, if I love it and have a friend I think would love it too, I can just hand it over. While a subset of Kindle books have “lending” enabled, where you can let one other Kindle user read a book through their account, most do not.

However, Kindles keep getting cheaper and cheaper. The cheapest one right now is $79, and I will bet good money that we’ll eventually see a $20 one. In a world with $20 Kindles, having a secondary one that you lend to your friends seems a lot more plausible.

And one thing I love about Amazon as a company is how relentlessly they push down prices. One of the books on my Kindle right now is an item from their Lending Library. If you have a Prime membership and a Kindle device, you can get one book a month for free from the subset of their catalog that is part of the program. I just read the entire Hunger Games series this way, without paying a penny.

In short, I have really enjoyed my first year in the Kindle ecosystem. It’s really a very exciting time for anyone who loves to read.

The Value of Intellectual Products

Ultimately it comes down to common sense. When you’re abusing the legal system by trying to use mass lawsuits against randomly chosen people as a form of exemplary punishment, or lobbying for laws that would break the Internet if they passed, that’s ipso facto evidence you’re using a definition of property that doesn’t work.

-Paul Graham, Defining Property

According to the RIAA, our failure to enforce copyright costs us $12.5 billion per year in economic losses and over 70,000 jobs. The message: piracy makes us poorer and leaves tens of thousands of people per year unemployed.

I don't think I can do a better job of responding to that sort of estimate than Rob Reid did recently with his short TED Talk.

Putting aside the methodologically questionable cost calculations, some have argued that unless a mechanism exists for rewarding creators whose creations we value, we won’t get those creations at all–or at least, not very many of them.

I believe that this is the only question that matters.

Priorities

I don’t care how many jobs are created or how much revenue the IP rights-holders make in a given year, or even if they make any revenue at all. All that matters is how much value ends up being created for consumers.

When I put out this idea on Twitter, Eli disagreed.

[blackbirdpie id=”181731409411575808″]

A discussion ensued.

[blackbirdpie id=”181735076042768384″]

[blackbirdpie id=”181735238534303746″]

[blackbirdpie id=”181735409171169280″]

[blackbirdpie id=”181735692945199104″]

[blackbirdpie id=”181736182311436289″]

[blackbirdpie id=”181736382627196929″]

[blackbirdpie id=”181737477655109633″]

And the answer, in my opinion, is that we want them to do that in a way that maximizes the value to consumers in the long run. Along the way, this will inevitably provide value to producers, but that value will be fleeting, as we find increasingly efficient ways to do what they do. The important thing is to consider the value provided to consumers, for as Bastiat said, “the interests of the consumer are the interests of the human race.”

In our role as consumers, it is in our interest for everything to be plentiful. In our role as producers, it is in our interest for whatever we produce to be scarce. Scarcity is poverty, abundance is wealth.

In short, our priority should be to arrive at the arrangement that provides consumers with the maximum amount of value from intellectual products. If we could provide vastly more value to consumers from intellectual products while their producers were unable to make a dime, that would be a net improvement. That's not the situation, but I think it's clarifying to keep that extreme scenario in mind.

Trade-Offs

To us it seems pretty obvious that people always want to treat it as a pricing issue, that people are doing this because they can get it for free and so we just need to create these draconian DRM systems or anti-piracy systems, and that just really doesn't match up with the data. If you do a good job of providing a great service giving people… as a customer I want to be able to access my stuff wherever I am, and if you put in place a system that makes me wonder if I'll be able to get it then you've significantly decreased the value of it.

-Gabe Newell, co-founder of Valve

What does not help is when consumers are pushed into something by reducing their options elsewhere. When Americans pay a premium above the global price for steel or sugar, it isn’t because they value those commodities that much more than their international counterparts; it is because policy has restricted the number of alternatives available to them. This approach destroys wealth rather than creates it, leaving only a handful of producers better off at our expense.

Every policy that the big content lobbies have pursued has been of this nature. From copyright extensions to bills like SOPA, they have sought to extract more value for themselves by shrinking the pie for the rest of us–all while telling us that it's for our own good.

Their efforts always come at the expense of the honest customer who isn’t trying to game the system. Meanwhile, they have done nothing to curb piracy, which remains trivially easy.

Eli recently wrote out a thought experiment about a world in which enforcing laws against murder becomes as hard as enforcing intellectual property laws is today. His argument:

Suppose a new technology were introduced that made it easy to get away with murder (e.g., David Friedman’s plan for Murder Incorporated). This technology makes it extremely costly, though, say, not impossible, to stop murders from occurring. What happens to the optimal amount of murder enforcement? The amount that must be spent to deter each murder has gone up, so the price of deterrence has gone up. Consequently, society should aim to deter fewer murders.

I was a little skeptical of where he was coming from…

[blackbirdpie id=”159725956800581634″]

But he elaborated: his point was that “if even laws against murder should be sensitive to enforcement costs, then it makes sense to also make copyright law sensitive to enforcement costs.”

Here’s what makes sense to an economist but sounds horrifying to most non-economists:

If we are looking at Eli’s scenario from the point of view of the objective goal of maximizing the number of lives we save, then logically, we must accept his conclusion.

Say his murder technology made it so that it would take $80 billion to save a single person from being murdered. Say, in theory, that we could have spent that $80 billion on buying enough penicillin to save 8,000 people's lives. Clearly, it makes more sense to spend it on the penicillin–if saving lives is really your goal.
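Just to make the arithmetic explicit, here is the back-of-the-envelope version, using nothing but the made-up numbers from the thought experiment:

```python
# Back-of-the-envelope comparison using the hypothetical numbers above.
deterrence_budget = 80_000_000_000      # $80 billion to prevent a single murder
lives_saved_by_deterrence = 1

penicillin_budget = 80_000_000_000      # the same $80 billion spent on medicine
lives_saved_by_penicillin = 8_000

cost_per_life_deterrence = deterrence_budget / lives_saved_by_deterrence
cost_per_life_penicillin = penicillin_budget / lives_saved_by_penicillin

print(f"Deterrence: ${cost_per_life_deterrence:,.0f} per life saved")  # $80,000,000,000
print(f"Penicillin: ${cost_per_life_penicillin:,.0f} per life saved")  # $10,000,000
```

At these hypothetical prices, every life saved through deterrence costs eight thousand times as much as a life saved with medicine. The numbers are invented, but the structure of the trade-off is not.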

All of this is to say that there are always trade-offs, and while the goal you have in mind may itself be subjectively chosen, the trade-offs themselves are absolutely objective. We can’t always know what they are, but they’re not something you can just wish away.

And really, that’s what the IP lobby’s strategy so far has come down to–trying to wish digital technology and the Internet away. But they are not going anywhere.

The New Balance

There is going to be more piracy than there used to be. That is the new reality that we are all going to have to live with.

Fortunately for us, we're not living in Eli's nightmare murder scenario–this technological shift brings benefits as well as challenges. In fact, I would argue that the benefits dramatically exceed the downside, from the perspective of the value being made available. For example, you get a kind of “production as consumption”: people who like to write or illustrate or take pictures share their work online with an audience of uncertain but probably limited size. This very post is an example of that. I value the ability to share my writing with others, however few they may be in number.

More to the point, you get near costless replication of digital content, and near costless distribution around the world. Has the rhetoric around IP protection reached a point where I really need to actively argue that those are very good things?

Alarmism aside, the big IP rights-holders are not exactly hurting for money. Services like iTunes, cable TV, and Netflix are making money hand over fist for them. Apple has paid billions to developers of apps alone–who are also producers of intellectual products. Then there are services like Kickstarter, which allow consumers to pitch in for the up-front costs of creating intellectual products, and services like PayPal, which make it easy to donate.

So the mechanisms to reward creators in the new digital landscape already exist, and new ones are being built all the time. Consumers have demonstrated that they are willing to open up their pocketbooks and use those mechanisms when it’s for something that they genuinely value.

So how should we be rethinking intellectual property enforcement, and intellectual property law itself, moving forward? I turn again to Eli:

In fact, the cost of deterrence has increased so much that we should begin to rethink copyright law. We could increase the benefits of deterrence if we targeted only high-value infringements. This means that we should shorten the term of copyright, since high-value IP tends to be newer IP (in fact, copyright terms have increased in recent decades, a move in the wrong direction). We might consider expanding “fair use” copyright exemptions to include more non-commercial uses, since commercial infringements are more likely to diminish the value of a copyright. Most importantly, we should withdraw public resources from the enforcement of IP violations. Private enforcement through the tort system has a built-in safety valve: when the cost of enforcement rises, people will do less of it. But the criminal system is essentially a public subsidy for enforcement; no wonder that pro-copyright factions are attempting to criminalize copyright infringement through SOPA and other legislation.

The bottom line is that recent expansions of copyright terms and enforcement powers get the comparative statics exactly backwards. In an age of costly enforcement, it’s time to give up, at least at the margin, on copyright. And at the margin, content creators should just be more polite to content consumers.

We need to loosen, but not eliminate, IP law and IP enforcement.

In the long tail, creators are already doing what needs to be done–focusing on creating value rather than on fighting against technological destiny. Witness Scott Sigler, who has built up enough of a fan base to make a living doing what he loves. Or the numerous webcomic artists who have managed to support themselves while still giving away their primary product for free.

I think that eventually, the big IP rights-holders will adapt to the latest technological shift just as they adapted to VHS, cassettes, and vinyl records in the past. For instance, big publishers are starting to reduce the uncertainty of their investments by waiting for people to make it big online before drafting them to the major leagues, so to speak.

We are simply in a transitional moment. Eventually our institutional arrangements will make full use of the advantages that digital technology and the Internet provide for creating value.

Cultural Innovation — Putting Together the Pieces

My goal in 2012 is to write at least one paper and try to get it published. The paper I have in mind is inspired by three men, and their corresponding books. These are Friedrich Hayek and The Constitution of Liberty, Thomas Sowell and Knowledge and Decisions, and Everett Rogers and Diffusion of Innovations. I want to put the pieces together in order to make a single, solid argument, but I suspect I’m going to need a few more pieces before I can get there.

F. A. Hayek: Trial and Error and Local Knowledge

At any stage of this process there will always be many things we already know how to produce but which are still too expensive to provide for more than a few. And at an early stage they can be made only through an outlay of resources equal to many times the share of total income that, with an approximately equal distribution, would go to the few who could benefit from them. At first, a new good is commonly “the caprice of the chosen few before it becomes a public need and forms part of the necessities of life. For the luxuries of today are the necessities of tomorrow.” Furthermore, the new things will often become available to the greater part of the people only because for some time they have been the luxuries of the few.

-Friedrich Hayek, The Constitution of Liberty

Hayek argued that everything in human society–from technology to words to ideas to norms–begins its life as something developed and adopted by a small subset of the population. Some tiny fraction of these end up gaining mainstream adoption.

When I read The Constitution of Liberty two years ago, I became enamored of this very simple framework. It seemed an elegant explanation for how cultures evolve over time, through a process of trial and error.

On the other hand, I found it frustrating that Hayek didn't elaborate on the process any further. If I had my way, I would throw out every last section of that book except the bits on cultural evolution, and have him fill the other 400-some pages by digging deeper into this concept.

What Hayek is known for more widely is his work on local knowledge. In particular, “The Use of Knowledge in Society” discusses how the price system makes it possible for people to act on their specific knowledge of time and place without needing to get the much more difficult to acquire big-picture knowledge. Speaking of a hypothetical man on the spot, he wrote:

There is hardly anything that happens anywhere in the world that might not have an effect on the decision he ought to make. But he need not know of these events as such, nor of all their effects. It does not matter for him why at the particular moment more screws of one size than of another are wanted, why paper bags are more readily available than canvas bags, or why skilled labor, or particular machine tools, have for the moment become more difficult to obtain. All that is significant for him is how much more or less difficult to procure they have become compared with other things with which he is also concerned, or how much more or less urgently wanted are the alternative things he produces or uses. It is always a question of the relative importance of the particular things with which he is concerned, and the causes which alter their relative importance are of no interest to him beyond the effect on those concrete things of his own environment.

Hayek's entire worldview was built around the idea of complex human systems which required more knowledge than any one individual within them could possibly have, something that Leonard Read captured more poetically in “I, Pencil”. The process of cultural evolution involves individuals and small groups trying out something new, which is then observed by others, who decide whether or not that new thing fits the particulars of their own circumstances, needs, and tastes. In short, it doesn't require much knowledge to come up with something new, and then an incremental amount of local knowledge is brought to bear as more individuals are exposed to that new thing.

But, as I said, he didn’t develop this system in any real detail.

Thomas Sowell: Knowledge Systems

The unifying theme of Knowledge and Decisions is that the specific mechanics of decision-making processes and institutions determine what kinds of knowledge can be brought to bear and with what effectiveness. In a world where people are preoccupied with arguing about what decision should be made on a sweeping range of issues, this book argues that the most fundamental question is not what decision to make but who is to make it–through what processes and under what incentives and constraints, and with what feedback mechanisms to correct the decision if it proves to be wrong.

-Thomas Sowell, Knowledge and Decisions

Sowell begins Knowledge and Decisions by explicitly recognizing his intellectual debt to Hayek in general and “The Use of Knowledge in Society” in particular. Yet in the book he goes far beyond any level of detail that Hayek provided on the subject, at least that I am aware of.

One of the crucial components of the book is the emphasis on feedback mechanisms.

[F]eedback mechanisms are crucial in a world where no given individual or manageably-sized group is likely to have sufficient knowledge to be consistently right the first time in their decisions. These feedback mechanisms must convey not only information but also incentives to act on that information, whether these incentives are provided by prices, love, fear, moral codes, or other factors which cause people to act in the interest of other people.

Clearly, feedback mechanisms must play a huge role in Hayek’s process of social trial and error. Feedback mechanisms are what determine what is considered “error” and force people to change course. As Sowell explains, they take many forms:

A minimal amount of information–the whimpering of a baby, for example–may be very effective in setting off a parental search for a cause, perhaps involving medical experts before it is over. On the other hand, a lucidly articulated set of complaints may be ignored by a dictator, and even armed uprisings against his policies crushed without any modification of those policies. The social use of knowledge is not primarily an intellectual process, or a baby’s whimpers could not be more effective than a well-articulated political statement.

He added “[f]eedback which can be safely ignored by decision makers is not socially effective knowledge.”

So discerning what outcomes we should expect from the various forms of social trial and error requires identifying the relevant feedback mechanisms. The feedback a potential new word faces takes a very different form from the feedback faced by a new product on the market, or by a publicly funded project.

The particulars of these feedback mechanisms, along with the incentives and institutional context, determine “what kinds of knowledge can be brought to bear and with what effectiveness” in each given case.

In many ways, Knowledge and Decisions is just good old-fashioned economics–it deals with incentives, with inherent trade-offs, and with scarcity. But it is a particularly Hayekian take on economics, with its focus on the scarcity of knowledge and on the role of very localized, difficult-to-communicate knowledge.

I don’t think Sowell gets nearly enough credit for this work among economists generally or even among Hayekians.

Everett Rogers: Curator of His Field

This book reflects a more critical stance than its original ancestor. During the past forty years or so, diffusion research has grown to be widely recognized, applied, and admired, but it has also been subjected to constructive and destructive criticism. This criticism is due in large part to the stereotyped and limited ways in which many diffusion scholars have defined the scope and method of their field of study. Once diffusion researchers formed an “invisible college” (defined as an informal network of researchers who form around an intellectual paradigm to study a common topic), they began to limit unnecessarily the ways in which they went about studying the diffusion of innovations. Such standardization of approaches constrains the intellectual progress of diffusion research.

-Everett Rogers, Diffusion of Innovations, 5th Edition

After I read The Constitution of Liberty, I realized that there was probably a literature behind the kind of phenomena Hayek was talking about. The term “early adopter”, which has become part of the mainstream lexicon, must have come from somewhere. Hayek was unfortunately of little help; he cited old theorists like Gabriel Tarde. While the diffusion literature owed a certain intellectual debt to Tarde, he was writing nearly half a century before the modern field emerged.

I eventually happened upon Everett Rogers' Diffusion of Innovations, the various editions of which basically bookend the entire history of the field–which is quite helpful, because the field began in his lifetime, and the first edition of the book was instrumental in its formation.

Where Hayek's and Sowell's works stay within the confines of high theory, Diffusion of Innovations is a thoroughly empirical book, at times painstakingly so. There is not a single concept Rogers introduces, no matter how simple, that he does not illustrate by summarizing a study or studies applying it.

Rogers helped formalize many of those concepts himself with the first edition of the book, published in 1962, when the literature was pretty sparse and dominated by rural sociologists. Since then, the literature has expanded across disciplines and in the volume of published work. As a result, by the last edition of the book, published only a year before he died, many aspects of the diffusion process had been solidly demonstrated by decades of work.

The book has always served as a tool both for introducing the field to those unfamiliar with it and for attempting to steer future work. In the final edition, Rogers highlights not only what the literature has managed to illuminate but also its shortcomings. In short, the book has just about everything you would want if you were trying to get a sense of what work has been done and what has been neglected.

There are aspects of the diffusion literature which are quite Hayekian–in particular, the emphasis on uncertainty and discovery processes.

One kind of uncertainty is generated by an innovation, defined as an idea, practice, or object that is perceived as new by an individual or another unit of adoption. An innovation presents an individual or an organization with a new alternative or alternatives, as well as new means of solving problems. However, the probability that the new idea is superior to previous practice is not initially known with certainty by individual problem solvers. Thus, individuals are motivated to seek further information about the innovation in order to cope with the uncertainty that it creates.

The various mechanisms Rogers describes that individuals employ to reduce uncertainty–trying the innovation on a partial basis, observing how it goes for peers who have adopted it, or measuring it against existing norms, to name a few–can be seen as clear-cut cases of economizing on information.

In many ways the diffusion model that Rogers lays out is the detailed system I wanted Hayek to develop. Rogers discusses so many specific aspects of the process: the role of heterogeneity and homogeneity, people who are more cosmopolitan or more localite, the different categories of adopters–including the familiar early adopters–and on and on. He concisely describes and categorizes the various feedback mechanisms working against adoption in the system.
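To give a flavor of the kind of process the model describes, here is a toy simulation of peer-driven adoption. It is only a sketch of the general idea, not Rogers' formal framework: the population size, the number of peers observed, and the persuasion parameter are all invented for illustration.

```python
import random

# Toy sketch of peer-driven diffusion: each period, every non-adopter looks at a few
# random "peers" and adopts with a probability that rises with how many of those peers
# have already adopted. All parameters below are invented for illustration.

random.seed(42)

POPULATION = 1_000
SEED_ADOPTERS = 10        # the initial "innovators"
PEERS_OBSERVED = 5        # how many peers each person checks per period
BASE_PERSUASION = 0.4     # chance of adopting if every observed peer has adopted

adopted = [False] * POPULATION
for i in random.sample(range(POPULATION), SEED_ADOPTERS):
    adopted[i] = True

for period in range(1, 16):
    snapshot = adopted[:]  # decisions are based on last period's state
    for person in range(POPULATION):
        if snapshot[person]:
            continue
        peers = random.sample(range(POPULATION), PEERS_OBSERVED)
        peer_share = sum(snapshot[p] for p in peers) / PEERS_OBSERVED
        if random.random() < BASE_PERSUASION * peer_share:
            adopted[person] = True
    print(f"period {period:2d}: {sum(adopted):4d} adopters")
```

Run it and the familiar S-curve emerges: a slow start while only a handful of innovators have adopted, a steep middle once enough peers have it for the rest of the population to notice, and a long tail of late adopters if you let it run.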

On the other hand, the beginning of the process–the actual generation of the innovation–is where the literature is by far the weakest. Rogers cites several who have criticized it for this, and agrees that it is a problem. He points out several attempts that have been made to address this problem, but it’s clear that not nearly as much work has been done nor are the results as solid.

Part of the problem is the historical origins of the field: the diffusion literature began with rural sociology, where innovations were developed in universities, which then peddled their wares to American farmers. The single most influential study dealt with the diffusion of hybrid corn, which seemed very clearly to be a quantifiable improvement over its alternatives. As a result, many diffusion studies start from the assumption that an innovation should diffuse, and that there is some problem with the people who reject it rather than adopt it.

How did the pro-innovation bias become part of diffusion research? One reason is historical: hybrid corn was very profitable for each of the Iowa farmers in the Ryan and Gross (1943) study. Most other innovations that have been studied do not have this extremely high degree of relative advantage. Many individuals, for their own good, should not adopt many of the innovations that are diffused to them. Perhaps if the field of diffusion research had not begun with a highly profitable agricultural innovation in the 1940s, the pro-innovation bias would have been avoided or at least recognized and dealt with properly.

Moreover, the process by which he believes innovations are generated is a very directed, top-down one. It involves “change agents” who are consciously attempting to solve problems and diffuse particular innovations. I'm not arguing against the existence of such agents–they are obviously an extensive part of society, from medical researchers seeking a cure for cancer and pharmaceutical companies attempting to win mainstream adoption for their drugs, to Apple coming up with a completely different kind of smartphone and tablet and bringing them to market.

But the change agents, as Rogers and the diffusion literature envision them, are only a part of Hayek’s story of social trial and error. Consider language–new words and phrases emerge all the time and diffuse through a process which I am certain is identical to the one Rogers describes. On the other hand, I highly doubt that there are “change agents” who developed these new words and phrases in a lab somewhere and then promoted them. I think the process is far more organic.

Rogers also discusses the role of norms in terms of how they hinder or help the diffusion of an innovation, but what is left unsaid, I think, is that those norms are themselves undoubtedly the product of previous diffusions. In Hayek and Sowell's framework, traditions and existing norms emerged in response to trade-offs that needed to be made throughout a culture's history. As Edmund Burke put it succinctly in Reflections on the Revolution in France:

We are afraid to put men to live and trade each on his own private stock of reason; because we suspect that this stock in each man is small, and that the individuals would do better to avail themselves of the general bank and capital of nations, and of ages.

The trial and error process that Hayek envisioned built up that “general bank and capital of nations, and of ages” as societies developed increasingly effective ways to manage their trade-offs.

Rogers does touch on this point of view from a couple of angles. First, he describes the work of Stephen Lansing, who uncovered the astonishing effectiveness of the local knowledge embodied in the religious hierarchy of Bali and described it in his book Priests and Programmers. This was a case where the seemingly beneficial innovations of the Green Revolution proved inferior to what looked like mere superstitious practice.

The Balinese ecological system is so complex because the Jero Gde must seek an optimum balance of various competing forces. If all subaks were planted at the same time, pests would be reduced; however, water supplies would be inadequate due to peaks in demand. On the other hand, if all subaks staggered their rice-planting schedule in a completely random manner, the water demand would be spread out. The water supply would be utilized efficiently, but the pests would flourish and wipe out the rice crop. So the Jero Gde must seek an optimal balance between pest control and water conservation, depending on the amount of rainfall flowing into the crater lake, the levels of the different pest populations in various subaks, and so forth.

When the Green Revolution innovations were introduced to the region, crop yields dropped, rather than increased. This intrigued Lansing.

In the late 1980s, Lansing, with the help of an ecological biologist, designed a computer simulation to calculate the effect on rice yields in each subak of (1) rainfall, (2) planting schedules, and (3) pest proliferation. He called his simulation model “The Goddess and the Computer.” Then he traveled with a Macintosh computer and the simulation model from his U.S. university campus to the Balinese high priest at the temple on the crater lake. The Jero Gde enthusiastically tried out various scenarios on the computer, concluding that the highest rice yields closely resembled the ecological strategies followed by the Balinese rice farmers for the past eight hundred years.

Clearly, Balinese society had arrived at this optimal solution through some process. But Rogers does not delve too deeply into this.
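Lansing's actual model is far richer than anything I can reproduce here, but a toy version of the trade-off Rogers describes might look like the following. The functional forms and constants are pure invention; the only point is that fully synchronized planting creates water shortages, fully staggered planting lets pests flourish, and an intermediate schedule does best.

```python
# Toy illustration of the pest-vs-water trade-off described above. This is not
# Lansing's "Goddess and the Computer" model; the loss functions are invented.

def expected_yield(stagger: float) -> float:
    """stagger = 0: every subak plants at once; stagger = 1: schedules fully spread out."""
    pest_loss = 0.6 * stagger ** 2          # pest damage grows as planting spreads out
    water_loss = 0.5 * (1 - stagger) ** 2   # water shortages grow as planting synchronizes
    return max(0.0, 1.0 - pest_loss - water_loss)

# Search over possible degrees of staggering.
best = max((s / 100 for s in range(101)), key=expected_yield)
print(f"best stagger ≈ {best:.2f}, expected yield ≈ {expected_yield(best):.2f}")
```

The optimum lands in the interior: neither pure synchronization nor pure randomness maximizes the harvest, which is exactly the sort of balance the Jero Gde's ritual calendar appears to encode.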

Rogers also acknowledges that the literature may have focused too exclusively on more centralized processes.

In recent decades, the author gradually became aware of diffusion systems that did not operate at all like centralized diffusion systems. Instead of coming out of formal R&D systems, innovations often bubbled up from the operational levels of a system, with the inventing done by certain lead users. Then the new ideas spread horizontally via peer networks, with a high degree of re-invention occurring as the innovations are modified by users to fit their particular conditions. Such decentralized diffusion systems are usually not managed by technical experts. Instead, decision making in the diffusion system is widely shared, with adopters making many decisions. In many cases, adopters served as their own change agents in diffusing their innovations to others.

Though Rogers recognizes that such processes exist, it's clear that the work done on them is much thinner than the more traditional, change-agent-based research.

Questions That Remain

As I said, all three of these pieces have some holes in them, and those holes aren’t necessarily filled just by putting all of them together.

The next logical step would probably be to seek out more material like Rogers’, where a lot of work has been done and concrete conclusions can be drawn. Any work on how new words and phrases emerge and proliferate would probably be a good start.

Online communities also have many customs, such as hashtags on Twitter and the hat tip among bloggers. The advantage of customs like these is that they leave behind recorded evidence, unlike, say, an oral tradition. We know, for instance, when hashtags first became popular among Twitter users–it is documented. A great deal of work is being done by communications scholars on subjects like these, which could also provide some more solid leads.

What I want to argue is that innovations are generated in a Hayekian trial-and-error process, and that some subset of them gain mass adoption in the manner described by the diffusion of innovations literature. I want to describe the role that local knowledge plays in that process, and how feedback mechanisms and incentives shape which innovations are generated and which ones are ultimately adopted.

But there’s more research to be done before I can make a case for this thesis that is solid enough for me to be comfortable with.

Homesteading the Open Web

Look at four other social things you can do on the Net (along with the standards and protocols that support them): email (SMTP, POP3, IMAP, MIME); blogging (HTTP, XML, RSS, Atom); podcasting (RSS); and instant messaging (IRC, XMPP, SIP/SIMPLE). Unlike private social media platforms, these are NEA: Nobody owns them, Everybody can use them and Anybody can improve them.

-Doc Searls, Beyond Social Media

Unlike Searls, Zittrain, and many others, I am not greatly bothered by the fact that a huge amount of our social interactions are taking place on privately owned platforms like Facebook or Twitter, and an increasing amount of stuff we used to use the web for is being done on privately owned platforms like iOS. From an economic point of view, I think it’s good for someone to have a vested interest in investing in these platforms.

It cannot be denied, however, that a user of Twitter is much more a tenant than a landlord; they can be kicked off without any reason whatsoever, if the company desires it. Moreover, the consolidation of such a large and distributed platform under one company gives it many of the characteristics of a technology of control. This is obscured by the fact that it has already been a tool of resistance in several countries, and certainly it isn't straightforwardly one or the other. But Twitter is a single company that hundreds of millions of people are using as a communications platform; it is therefore one big target for regulators and tyrants the world over.

Consider: they recently announced that they had created a way to censor tweets in specific countries without removing them globally. They knew that in order to enter certain markets, they would be forced to comply with some less than thrilling local regulations on freedom of expression. The decision facing the company was either to stay out of those markets or to comply with the regulations. So they came up with an approach that wouldn't allow local censorship to extend its reach globally, and they announced it before anyone asked them to use it, in an attempt to preempt the bad PR this would inevitably bring.

That will never happen on a blog like mine.

I pay for server space and a domain name, and I use WordPress's software. If the hosting company tried to mess with me, it would be trivial to move to another one. If WordPress makes changes I dislike, or somehow builds tools for censorship into its code, I can swap it for Movable Type or any number of alternatives. I regularly back up my data, so if someone seized the servers it was on, I would not lose it.
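For the curious, backing up a WordPress site really just means copying two things: the files (wp-content in particular) and the database. A minimal sketch, with placeholder paths, database name, and credentials standing in for whatever your host actually uses, might look like this:

```python
import datetime
import subprocess
import tarfile

# Minimal WordPress backup sketch: archive the files and dump the database.
# The paths, database name, and credentials below are placeholders.

stamp = datetime.date.today().isoformat()

# 1. Archive the WordPress directory (themes, plugins, and uploads live in wp-content).
with tarfile.open(f"/backups/wordpress-files-{stamp}.tar.gz", "w:gz") as archive:
    archive.add("/var/www/wordpress", arcname="wordpress")

# 2. Dump the database with mysqldump.
with open(f"/backups/wordpress-db-{stamp}.sql", "w") as dump_file:
    subprocess.run(
        ["mysqldump", "--user=wp_user", "--password=changeme", "wordpress_db"],
        stdout=dump_file,
        check=True,
    )
```

Run something like that on a schedule, copy the results somewhere off the server, and the "servers get seized" scenario becomes an inconvenience rather than a catastrophe.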

In short, I have carved out a small piece of real estate in the open web.

Now, the advantages to platforms like Twitter are undeniable. No one is going to Twitter in order to see what the latest thing I have to say or share is; they go there because everyone they might be interested in hearing from is there. Most of the time this blog sees very little traffic, while I have conversations on Twitter and Facebook basically every day of the week.

Again, unlike Searls, I do not see the rise of these walled-garden platforms as onerous. But I do think everyone should consider homesteading the open web: setting up something that is truly theirs, that they can invest in over time.

That is part of the reason why, after seven years on Blogger, I decided to jump ship and start this site.

If you’re interested in this but aren’t sure how to proceed, my friend Lauren is offering to help people for free, if you sign up for a Bluehost account through her site. Many hosts have easy, one-click options for installing WordPress after you’ve paid for space, so you probably won’t even have to worry about the technical aspects of installation.

Just as there are benefits to having privately owned platforms, there are definite benefits to having something that you own from end to end.

 

A Tale of Two Audiences

Disclaimer: the following is simply a hypothesis, a story if you will. I submit it to you with no pretension of either originality or authority.

When content companies look at their incoming traffic, they divide it into two general categories–referral traffic and direct traffic.

Both are fairly self-explanatory. Referral traffic comes from people who found you through somewhere else. The biggest source of this in general is search traffic, but links from Facebook, Twitter, or someone's blog are also typical.

Direct traffic, as the name implies, is made up of people who go directly to the site. Often this means that they are loyal followers of your content.

A loyal user is much more valuable than someone who finds an article of yours from Google, gives it a look, and then never comes back again. They’re more valuable in the quantifiable sense that they see your ads a lot more than that drive-by search user, but they’re more valuable in less obvious ways as well.

The vast majority of the traffic to the big content sites is referral traffic. While the individual referred user may be less valuable than the individual direct user, the total amount of revenue from referral traffic is much larger. This is the reason the internet is rife with SEO spam and headlines engineered to get clicked when they're tweeted. Search traffic is a gigantic pie, and empires have been built on it alone, with practically zero direct traffic.

However, having a loyal user base that comes to your site regularly is extra valuable precisely because it is likely to get you more referral traffic. Consider: loyal users are more likely to share a link to content on your site on Twitter, Facebook, Tumblr, Reddit, or a good old-fashioned blog. This is exactly the kind of behavior that generates referral traffic–directly, from people clicking those links; indirectly, from those people possibly sharing the links themselves; and yet more indirectly, if they link from their blogs and thus improve your Google ranking.

Of course, even within your loyal base there is variation in how valuable a particular user is. The guy who has read the Washington Post for forty years and has just moved online is less valuable to the Post than someone like Mark Frauenfelder, who might link to one of their articles on Boing Boing, improving their Google ranking further and sending some of his traffic their way. But it's still useful to think broadly about direct traffic vs. referral traffic.
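To see why the distinction matters even in crude revenue terms, here is a toy comparison. Every number in it is invented; the only point is that a loyal reader's value compounds through the referral traffic they generate.

```python
# Toy comparison of a drive-by search visitor and a loyal reader, over one year.
# Every number here is invented for illustration; only the structure matters.

RPM = 5.00  # ad revenue per 1,000 pageviews

def revenue(pageviews: float) -> float:
    return pageviews * RPM / 1000

# A drive-by visitor reads one article and never returns.
drive_by_value = revenue(1)

# A loyal reader visits most days, and their shared links pull in extra referral visits,
# some of which come back and share links of their own.
loyal_pageviews = 300                  # direct visits over the year
referred_pageviews = 12 * 10           # say one shared link a month, ten clicks each
second_order_pageviews = 0.1 * referred_pageviews * 5  # a tenth of those become semi-regulars

loyal_value = revenue(loyal_pageviews + referred_pageviews + second_order_pageviews)

print(f"drive-by visitor: ${drive_by_value:.3f}")
print(f"loyal reader:     ${loyal_value:.2f}")
```

The exact ratio is meaningless, but the structure isn't: once you count the visits a loyal reader sends your way, they are worth hundreds of drive-by search visitors.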

The Porous Wall

Back in March, the New York Times launched something like a paywall. There are numerous particulars and exceptions, but the long and short of it is that someone like me, who only ever visits the Times when someone I know links to it, faces no wall of any sort. Meanwhile, someone who has loyally visited the Times every day for years will have to pay if they want to see more than 20 articles a month.

At the time, Seamus McCauley of Virtual Economics pointed out the perverse incentives this created: the Times was literally punishing loyalty without doing anything to lure in anyone else. Basic economic intuition dictates that this should mostly result in reducing the amount of direct traffic they receive.

The Times did spend a reported $40 million researching this plan, and while I'd never be the first to credit the Times with business acumen, you have to think they did something with all that money. As usual, Eli had a theory.

[blackbirdpie id=”52741320770469891″]

Imagine, for a moment, that all of the Times’ direct traffic was composed of individuals who were perfectly price inelastic in their consumption of articles on nytimes.com. That is, they would pay any price to be able to keep doing it. Assume, also, that all of the Times’ referral traffic was perfectly price elastic–if you raised the price by just one cent, they would abandon the site entirely. The most profitable path for the Times in this scenario would be, if possible, to charge direct traffic an infinite amount to view the site, while simultaneously charging the referral traffic nothing so they keep making the site money by viewing ads.

The reality is a less extreme dichotomy–though I wouldn’t be surprised if a significant fraction of the Times’ referral traffic did vanish if they tried to charge them a penny. Still, the direct traffic, while undoubtedly less elastic than the referral traffic, is unlikely to be perfectly inelastic.

Getting a good idea of just how inelastic that demand is would be a very valuable piece of information for the Times to have, and I think Eli is right that this is exactly what they spent the $40 million on–that, and devising the right strategy for effectively price discriminating between the two groups.
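A stripped-down version of that logic, with entirely made-up numbers, might look like this: treat referral traffic as perfectly elastic (so it stays free and earns only ad revenue), give the loyal audience a simple linear demand curve, and search for the subscription price that maximizes total revenue.

```python
# Stripped-down sketch of the two-audience pricing logic. All numbers are invented.
# Referral traffic is treated as perfectly price elastic: charge it anything and it
# vanishes, so it stays free and earns only ad revenue. Direct (loyal) traffic gets
# a linear demand curve: the higher the subscription price, the fewer subscribers.

AD_REVENUE_PER_READER = 2.0          # yearly ad revenue per reader, either audience
REFERRAL_READERS = 30_000_000        # drive-by readers per year
LOYAL_READERS = 1_000_000            # loyal readers at a price of zero
MAX_PRICE = 400.0                    # yearly price at which the last loyal reader quits

def subscribers(price: float) -> float:
    """Linear demand: loyal readership falls off as the price rises."""
    return max(0.0, LOYAL_READERS * (1 - price / MAX_PRICE))

def total_revenue(price: float) -> float:
    subs = subscribers(price)
    subscription_revenue = subs * price
    ad_revenue = (subs + REFERRAL_READERS) * AD_REVENUE_PER_READER
    return subscription_revenue + ad_revenue

best_price = max(range(0, 401, 5), key=total_revenue)
print(f"revenue-maximizing yearly price: ${best_price}")
print(f"total revenue at that price: ${total_revenue(best_price):,.0f}")
```

With these invented numbers, the loyal audience ends up paying a couple of hundred dollars a year while the referral audience pays nothing, which is structurally what the porous wall does. The hard part, and presumably where the $40 million went, is estimating what the real demand curve looks like.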

It’s too soon to tell if the strategy will work for the Times, or if it’s a viable strategy for any content company to pursue.

Come One, Come All

2011 also saw the birth of a new technology website, The Verge.

The Verge began after a group of editors from Engadget left, in the spirit of the traitorous eight, because they believed they could do it better than AOL would allow them to. During their time at Engadget, they had developed a following through the listeners of their podcasts and their presence on social media–Twitter in particular. They also were fairly active in the comment sections of their own posts.

To bridge the gap between their departure from the old site and the launch of the new one, they set up a makeshift, interim blog called This Is My Next and continued their weekly podcast. This kept them connected with the community they had built around themselves and allowed them to keep building it while they worked on launching The Verge.

There are a lot of things that I really like about The Verge. First, they bundled forums into the site and gave people posting in them practically the same tools the writers use for posts on the main site. The writers themselves participate in the forums, and any user post they find particularly exceptional gets highlighted on the main site and across The Verge's various social network presences.

Second, they do a lot of long-form, image- and video-rich, niche pieces that may take time to get through but have a kind of polish that is rare among web-native publications.

When I told a friend of mine about how much I loved these pieces, he very astutely asked “but it costs like eleven times as much to make as a cookie-cutter blog post, and do you really think it generates eleven times more revenue?”

This question bothered me, because in a straightforward sense he seems to be right. Say that Paul Miller could have written eleven shorter posts instead of this enormous culture piece on StarCraft. There is no way that The Verge made eleven times as much in ad revenue from that post as they would from the eleven shorter posts he could have written.

But posts like that one attract a certain kind of audience. I may not read every long feature piece that The Verge does, but I like that they are there and I read many of them. The fact that they do those pieces is part of the reason that I made them my regular tech news read rather than Engadget or Gizmodo.

In short, the clear strategy being pursued by The Verge is to reward their most loyal audience, even if it doesn't directly result in more revenue than trying to game search engines would. There isn't always tension between the two strategies–one of the site's features is called story streams, pages like this one that make it easy to see how a particular story has unfolded so far. Each is also one more page that could potentially show up in Google search results.

Still, having followed the group very closely at Engadget first, and now at The Verge, it seems clear to me that the core mission is to build up the loyal visitors. If a feature piece costs eleven times as much as a shorter one, but makes a loyal reader out of someone who indirectly brings 100 visitors to the site over time by sharing links on Twitter and Facebook, was the feature piece profitable?
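Framed as arithmetic, that question is a break-even problem: how many new loyal readers does the feature have to create before the extra cost pays for itself? The figures below are invented, but they show the shape of the calculation.

```python
# Break-even sketch for the "eleven times as expensive" feature piece.
# All figures are invented; only the structure of the question matters.
# (For simplicity, this ignores the traffic the forgone short posts would have earned.)

RPM = 5.00                         # ad revenue per 1,000 pageviews
SHORT_POST_COST = 100.0            # cost of a quick, cookie-cutter post
FEATURE_COST = 11 * SHORT_POST_COST

VISITS_PER_NEW_LOYAL_READER = 100  # direct + referred visits each new loyal reader brings over time
revenue_per_loyal_reader = VISITS_PER_NEW_LOYAL_READER * RPM / 1000

extra_cost = FEATURE_COST - SHORT_POST_COST
readers_to_break_even = extra_cost / revenue_per_loyal_reader

print(f"each new loyal reader is worth about ${revenue_per_loyal_reader:.2f} in ads alone")
print(f"the feature must win over about {readers_to_break_even:,.0f} loyal readers to break even")
```

With these numbers the feature needs to win over a couple of thousand new loyal readers to pay for itself in ad revenue alone, before counting whatever those readers are worth in less direct ways.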

The Verge is even younger than the Times’ paywall, so time has yet to tell if their approach is sustainable.

Farming Referral Traffic

At the start of 2011, a big rumble was building in the tech community about Google's search quality. Many claimed that its results had become filled with spam. Google was not deaf to these criticisms, but argued that spam was actually at an all-time low. The problem, they argued, wasn't spam but “thin content”–and what have come to be called content farms.

The logic of the content farm is that with enough volume, enough of your pages will make it high enough in search results to get you enough referral traffic to make a pretty penny. In short, a content farm focuses its efforts on acquiring referral traffic and foregoes any real effort to build up direct traffic.

This in itself isn't a problem, of course. If you have 500,000 of the greatest articles ever written by mankind, then you, Google, and search users are all better off if you rank highly in relevant search results. And many companies that took a beating when Google began targeting thin and low-quality content for downranking have cried that they were unfairly lumped in with the rest just for having a lot of pages.

The content farm controversy aside, there is a clear and obvious place for a site that has an audience made up almost entirely of referral traffic–reference sites. Wikipedia, for instance, at one point received somewhere in the neighborhood of 90 percent of its visits from referral traffic. Though it is probably the biggest receiver of search traffic, it is not unusual in this regard–there is a whole industry of people whose livelihoods are made or broken by the various tweaks of Google’s algorithm.

The Path Forward

As I said, much remains uncertain. Five or ten years from now we’ll look back and be able to say which experiments proved to be the models for striking the balance between these audiences. For my part, I can only hope that it looks more like what The Verge is doing than what the New York Times is trying.