Norms and Freedom

In his latest book, Luigi Zingales asks why economists aren’t more willing to talk about what the optimal norms are for a successful economy, rather than focusing exclusively on what the optimal laws are. Over at Modeled Behavior, Adam Ozimek asks:

Is this a libertarian, conservative, or progressive idea? If you view the pressure of social norms as a way to restrict individual freedom, then this can easily be seen as progressive or conservative, depending on the behavior being restricted.

This question has a history behind it. In On Liberty, John Stuart Mill made it clear that he considered social stigma to be a form of coercion. This was especially so when it influenced who people were willing to do business with:

For a long time past, the chief mischief of the legal penalties is that they strengthen the social stigma. It is that stigma which is really effective, and so effective is it, that the profession of opinions which are under the ban of society is much less common in England, than is, in many other countries, the avowal of those which incur risk of judicial punishment. In respect to all persons but those whose pecuniary circumstances make them independent of the good will of other people, opinion, on this subject, is as efficacious as law; men might as well be imprisoned, as excluded from the means of earning their bread.

Thomas Sowell, a Hayekian, spent a fair amount of space in The Vision of the Anointed criticizing Mill for his anti-stigma arguments. For Sowell and Hayek, norms are the very fabric of the social order. They come from a school of thought dating back to Edmund Burke, Adam Smith, and David Hume. While Mill shared much in common intellectually with the latter two, on this subject he is much closer to Rousseau, who believed we were born free, only to be shackled by social conventions soon after.

This debate centers on different ideas of what coercion is. On Sowell’s side of the debate, there’s a fairly clear line–if you are doing something because of the explicit or implied threat of violence, you are being coerced. The threat of refusing to do business with someone is not coercion because no one is entitled to do business with anyone; the right to choose who I do business with is an inherent part of my freedom of association. The fact that I am choosing not to do business with you because you have taken some action or hold some belief that there is a social stigma against does not make it coercion, any more than if I were motivated simply by the fact that I think you are ugly or something.

How Norms Change

The ancient Greek sophist Protagoras argued that morality is something that human beings are constantly teaching to one another, similar to how we are constantly teaching each other language. The moral sense theorists, and more recently cognitive scientists and moral psychologists, have given us an idea of the mechanisms through which this co-learning occurs.

Most of the time we are taught to stick to a set of norms that has existed for a very long time. But moral change does happen.

Take the American Civil Rights Movement as an example. I do not think that its progress should be measured in the laws it managed to get enacted. Its progress should be measured in the extent to which it moved our moral framework.

Moral changes, like all social changes, start with small groups and spread in a diffusion of innovations-like process. Most such innovations never spread at all. This social trial and error is the engine of all institutional change, moral or otherwise.

As moral change follows the logic of the diffusion of innovations, we would expect successful revolutions to have the advantages predicted by that literature. The activists of the Civil Rights Movement did not just give speeches and publish books; they engaged in many forms of verbal and visual rhetoric, and took many dramatic actions, which put their perspective in the context of traditional American ideals and religious doctrine. Though their success constituted a change in the norms of the country, it was more likely precisely because they framed the change within preexisting norms.
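The diffusion-of-innovations logic above is often formalized as the Bass diffusion model. Here is a minimal sketch in Python; the coefficients are purely illustrative assumptions, not estimates drawn from any actual movement:

```python
# A minimal Bass diffusion model of how an innovation (moral or otherwise)
# spreads: a small spontaneous-adoption rate p plus an imitation rate q.
# The coefficient values here are illustrative assumptions only.

def bass_adoption(p=0.01, q=0.4, steps=30):
    """Cumulative adoption share (0 to 1) at each step of a discrete Bass model."""
    adopted = 0.0
    path = []
    for _ in range(steps):
        # New adopters: spontaneous innovators plus imitators of current adopters
        new = (p + q * adopted) * (1.0 - adopted)
        adopted += new
        path.append(adopted)
    return path

curve = bass_adoption()
# The result is the familiar S-curve: slow start, rapid middle, saturation.
```

Most innovations, in this framing, simply have p and q too small to ever leave the flat part of the curve.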

The Limits of Individual Influence

If you think that you can effect great change as a lone individual, you are setting yourself up for disillusionment. In all of social life, everyone is but a tiny part of a much larger whole. Even the President of the United States, and others with even greater discretionary authority, are constrained by the very nature of the systems they work within. Individual impact varies dramatically, to be sure, but even the most exceptional individual’s influence will always be small compared to the scope of the system acting upon them. It is also probably reasonable to assume that it is highly unlikely you will become the Martin Luther King, Jr. of your particular moral movement.

Once we have given up on individual exceptionalism, we are left with the same tools that human beings have been using for as long as we have formed groups. You cannot hope to shape the moral compass of a nation with a single blog post, but you are influential within the group of 100 or so people you are most closely associated with, and especially the 15 or so people in your inner circle–see Paul Adams on this subject, and his book for a more thorough review of the literature.

You must also accept that these groups will have as much influence on you as you have on them, or more. In both how you influence and are influenced by them, your social groups are the venue for your participation in all social change, including moral change.

Participating in Change

Dan Klein once said that he felt like he shouldn’t be in GMU’s economics department, where there were plenty of people who already agreed with him, but instead should go to a more mainstream department where he could work to change minds. This is a misunderstanding of how minds are actually changed. If Klein went to such a department, he would probably just become marginalized within that community. Rather than increasing his influence, it would almost certainly reduce it.

At GMU, a community of libertarians has formed, and a culture has developed within the department. Students who go to grad school there are immersed in that culture while they are pursuing their degree. They integrate into and are influenced by that culture to varying extents. Many then take that culture with them when they move on to other things. This is not unique to GMU’s economics department–all academic departments develop a culture of some kind, which acts upon and is acted upon by the students that pass through it.

We tend to have a broadcast model of influence in our heads–we think that by writing blog posts and going on TV we will change people’s minds. But the vast majority of influence happens at the level of a community. This is true even in exceptional cases–Marginal Revolution may be an influential blog, but the economics blogosphere as a community has more impact overall on the parameters of the discussions than any one of its members. Tyler Cowen’s biggest individual impact on this discussion is as a member of a community of high visibility individuals, such as Paul Krugman and Scott Sumner.

The norms developed within the communities of which we are a part are then subject to the dynamics of the diffusion of innovations–they could gain mainstream adoption, they could remain niche, or they could hit some middle point between the two extremes. They could persist for long periods of time at whatever level they attain, or they could flame out quickly and disappear.

To the extent that you are encouraging certain norms within your community which could eventually diffuse beyond it, you are participating in the process of moral change.

Stories of Progress and Stagnation

I grew up around computers and have always taken it for granted that we lived in a time of enormous innovation and growth. Within my lifetime, my family has gone from something that looks like this:

To all individually having iPhones, which are enormously more powerful machines, connected to the Internet, and robust platforms for a huge variety of independently developed software. Never mind our various laptops and desktop computers!

It seems to me that from around the time the term “web 2.0” was coined until the market crash of 2008, most people accepted the story I was inclined to accept by default: that we lived in an era of accelerating progress, in which every year would see huge leaps over the previous year, the year after that would see a leap of similar relative magnitude, and this would go on indefinitely.

There have always been stagnationists, but it’s only in the last couple of years that stagnation stories have started to become fashionable again. Tyler Cowen deserves no small amount of credit, as The Great Stagnation made an enormous splash when it came out in January of last year. While discussions of the recession up until then had consisted almost entirely of diagnosing the financial bubble, post-TGS discussions had to face the possibility that our present predicament might be part of larger, more structural trends. Regardless of whether the book changed anyone’s mind directly, there can be little doubt that it played a huge role in setting the agenda.

The debate that has emerged has fascinated me, both as someone who is deeply interested in our propensity to tell stories, and simply because it is extremely hard to determine who is correct.

The Death of Ambition and the Modern Game of Inches

Tyler Cowen credits PayPal co-founder and venture capitalist Peter Thiel with inspiring the story behind The Great Stagnation. Recently, Thiel debated Google Chairman Eric Schmidt on the subject of technology and progress. One section of that debate that made the rounds in the economics blogosphere concerned Google’s $50 billion in the bank.

Thiel argued that “if we’re living in an accelerating technological world”, Google should be able to invest that $50 billion in technology in a way that returns their investment many times over. Even if Googlers are claiming that we live in an era of progress, their actions speak to a more pessimistic assessment.

Thiel believes that we live in a deterministic world in which progress is made by making big bets on enormous projects. Part of the reason we no longer pursue some ambitions is that we have all become indeterminists; our resources are all tied up in hedging against uncertainty. Even though the tech sector is characterized by progress so stable and relentless that we refer to several specific trends as “laws”, the players are, if anything, more indeterminist in their worldview than average.

Google’s low-yielding $50 billion is the ultimate symbol of this. Google made nearly $10 billion in profits in 2011, and almost all of that came from search, their core product. Thiel’s argument is that if Google believed that we lived in a time of accelerating technological progress, where $10 billion a year breakthroughs were just lying around waiting to be invented, they would be spending every penny they had on attempting to make those breakthroughs happen.

More important than the cultural change, however, is the fact that public policy has systematically outlawed ambitious projects of any sort. From the debate with Schmidt:

The why questions always get immediately ideological. I’m Libertarian, I think it’s because the government has outlawed technology. We’re not allowed to develop new drugs with the FDA charging $1.3 billion per new drug. You’re not allowed to fly supersonic jets, because they’re too noisy. You’re not allowed to build nuclear power plants, say nothing of fusion, or thorium, or any of these other new technologies that might really work.

So, I think we’ve basically outlawed everything having to do with the world of stuff, and the only thing you’re allowed to do is in the world of bits. And that’s why we’ve had a lot of progress in computers and finance. Those were the two areas where there was enormous innovation in the last 40 years. It looks like finance is in the process of getting outlawed. So, the only thing left at this point will be computers and if you’re a computer that’s good. And that’s the perspective Google takes.

Further down, responding to criticism of the financial sector, he adds:

I disagree with the premise behind the question that there’s some sort of tradeoff between finance and other areas of innovation. I think it’s easy to be anti-finance at this point in our society, and I think the reality is we have an economy that got very lopsided towards finance, but it’s fundamentally because people weren’t able to do other things.

So, if you ask why did all the rocket scientists go to work on Wall Street in the ’90s to create new financial products, and you say well they were paid too much in finance and we have to beat up on the finance industry, that seems like that’s the wrong side to focus on. I think the answer was, no, they couldn’t get jobs as rocket scientists anymore because you weren’t able to build rockets, or supersonic airplanes, or anything like that. And so you have to ‑‑ it’s like why did brilliant people in the Soviet Union become grand master chess players? It’s not that there’s something deeply wrong with chess, it’s they weren’t allowed to do anything else.

In short, we have grown risk averse in both our culture and in our policy.

Science fiction writer Neal Stephenson is firmly in the stagnationist camp, and he definitely believes it is all about risk aversion. He has written:

 Innovation can’t happen without accepting the risk that it might fail. The vast and radical innovations of the mid-20th century took place in a world that, in retrospect, looks insanely dangerous and unstable. Possible outcomes that the modern mind identifies as serious risks might not have been taken seriously — supposing they were noticed at all — by people habituated to the Depression, the World Wars, and the Cold War, in times when seat belts, antibiotics, and many vaccines did not exist.

In Stephenson and Thiel’s story, true innovation is risky, bold, and visible, while what passes for innovation in modern times is peanuts by comparison. Stephenson pointed to the ongoing competition to build the world’s tallest building as an emblematic example of the problem. These days the tallest building in the world is only a few inches taller than the previous record-holder, and only holds the record for a few months as another slightly taller building is always being constructed in near parallel.

What Stephenson wants is for us to build a structure several orders of magnitude larger than anything that’s ever been built before; a structure that will hold the record for decades before it becomes technologically possible or financially conceivable to surpass it. To Stephenson as well as Thiel, that is what innovation should look like.

The stagnationist has no problem with the ground game, but is frustrated that there doesn’t seem to have been any passing game in forty years. Meanwhile, everyone is going around presenting incremental gains as though they were big breakthroughs. Neither Stephenson nor Thiel, nor indeed Cowen, is impressed. You can talk about all the wonders we’ve seen since the mass adoption of the Internet, but have they really moved the needle? Just think about penicillin, anesthetics, the automobile, and the airplane, not to mention all the spillover innovations that came from putting a man on the moon!

At Founders Fund, the venture capital firm at which Thiel is a partner, they have a saying: “we wanted flying cars, and instead we got 140 characters.”

The Value of the Unseen

There is only one difference between a bad economist and a good one: the bad economist confines himself to the visible effect; the good economist takes into account both the effect that can be seen and those effects that must be foreseen.

-Frédéric Bastiat, What Is Seen and What Is Not Seen

I see a lot of truth in pieces of the arguments made by Thiel, Stephenson, and Cowen, but I am uncertain whether I buy into all of it. My natural inclination has always been to dismiss stagnationist stories, and Stephenson’s fixation on big, visible things made me all the more skeptical. The stories I have grown close to over the years frequently point out how what seems to be plain truth is often, when you take a step back, a lot less clear and sometimes completely wrong. You think, for instance, that making something as simple as a pencil is the easiest possible task, but it turns out that there’s a huge process behind it, and no single individual possesses the knowledge needed to make a pencil from scratch.

Take GDP as an example. It’s a nice point of reference, but if you start assuming that GDP–or even GDP per capita–is synonymous with national wealth, you run into some serious problems. GDP is essentially just aggregate spending. When you buy an iPhone for $199.99, you are adding $199.99 to this year’s GDP. It’s a great proxy for national income, but it has many recognized problems. In what is perhaps a dated and vaguely sexist-sounding example, Paul Samuelson came up with the following scenario:

Take Samuelson’s example of the man marrying his maid. Samuelson’s point is that the new bride continues doing the housework without being paid. But that would not mean that the work suddenly had no market value. So, in this case, GDP actually understates the market value of all final goods and services because this particular service is no longer exchanged on the market.

The valued activity–the housework–is still being done, but because there isn’t any spending involved, it isn’t measured in GDP.

Bryan Caplan has pointed out repeatedly that the consumption done on digital devices and on the Internet is hugely mismeasured by metrics like GDP. In one post, he points out one implication of all the various network products seeing success in the market today:

In the real world, network goods visibly improve all the time. But suppose they didn’t. Suppose the Facebook of today used the same source code as it did five years ago, but still attracted new users at the same rate as it did in the real world. Many economists would be tempted to call this “stagnation,” but they’d be wrong. Even if Facebook’s source code stayed the same, the mere fact that more people are using the product causes it to be better. Why? Because the point of the product is to amusingly interact with your friends. The more friends who use it, the more amusing it is.

The upshot: Economists (and people generally) underestimate true economic growth for all expanding network products. When you measure the quality of network products, you can’t simply look at them in isolation. You have to measure what you can do with them.

Caplan argues that along many dimensions our measurement biases are worse than ever, even as our standard of living is better than ever.
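Caplan’s network-goods argument can be made concrete with a toy model. The quadratic, Metcalfe-style value function and the numbers below are illustrative assumptions, not measurements:

```python
# A toy version of Caplan’s network-goods point: hold the code and the
# (zero) price of a service fixed, and the value it delivers still grows
# as users join. The quadratic, Metcalfe-style value function is a
# stylized assumption, not a measurement.

def network_value(users, value_per_connection=0.01):
    """Total value if every pair of users generates a small fixed value."""
    return value_per_connection * users * (users - 1) / 2

small = network_value(1_000)
large = network_value(2_000)
# Doubling the user base roughly quadruples the value delivered,
# while the service’s contribution to measured GDP stays at zero.
```

The exact functional form doesn’t matter much; any value function that rises with the number of users makes the same point against measuring the product in isolation.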

Looking at my own daily life, a huge amount of my consumption is simply not counted in GDP. I consume an enormous amount of content without paying anything for it. There’s also the reverse benefit–I can write lengthy posts like this one and put them in a public place, whereas before the Internet only the lucky few who managed to get published could do anything roughly equivalent.

If we are a groupish species, and I believe we are, then the ability to connect with others and increase the number of our shared experiences is a huge benefit. Clay Shirky’s excellent book, Here Comes Everybody, discusses how modern technology has reduced the transaction costs associated with group action, the benefits of which we are only beginning to understand. In his followup, Cognitive Surplus, he described how central hubs like Wikipedia are able to aggregate a few minutes of effort from enough sources to result in one enormously valuable resource.

Even after The Great Stagnation, many defend the story that progress is accelerating. In Race Against the Machine, Erik Brynjolfsson and Andrew McAfee argue that technological innovation has been going at a breakneck pace for decades, and we’re only now entering the second half of the chessboard. Yet their vision of progress has a caveat–we are currently at a moment where technology is replacing humans in performing certain tasks faster than entrepreneurs are coming up with new jobs that humans are better at than machines. Arnold Kling said it best:

 The paradox is this. A job seeker is looking for a well-defined job. But the trend seems to be that if a job can be defined, it can be automated or outsourced.

Still, overall well-being is going way up as machines become much, much more efficient at providing us with things that we value for rock bottom prices. So on net, we’re seeing tremendous progress.

Radical Uncertainty

Consider a turkey that is fed every day. Every single feeding will firm up the bird’s belief that it is the general rule of life to be fed every day by friendly members of the human race “looking out for its best interests,” as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.

-Nassim Nicholas Taleb, The Black Swan

If our culture has embraced indeterminacy, or more accurately uncertainty, as Thiel thinks we have, then Taleb has taken this story farther than anyone. Whereas Thiel will argue:

 Several people have successfully started multiple companies that became worth more than a billion dollars. Steve Jobs did Next Computer, Pixar, and arguably both the original Apple Computer as well as the modern Apple. Jack Dorsey founded Twitter and Square. Elon Musk did PayPal, Tesla, SpaceX, and SolarCity. The counter-narrative is that these examples are just examples of one big success; the apparently distinct successes are all just linked together. But it seems very odd to argue that Jobs, Dorsey, or Musk just got lucky.

Taleb has no compunction about arguing that they got lucky–or, at the very least, that we are incapable of telling the difference between pure luck and its opposite. In Fooled By Randomness, he conjures up a scenario in which an eccentric rich person will pay $10 million to whoever wins a game of Russian roulette. Someone might get lucky and win, but if they keep playing, the odds will eventually catch up with them. However, if the pool of players is large enough, you will get a handful of consistent winners even after many rounds of the game.

In addition, in time, if the roulette-betting fool keeps playing the game, the bad histories will tend to catch up with him. Thus, if a twenty-five-year-old played Russian roulette, say, once a year, there would be a very slim chance of his surviving until his fiftieth birthday–but, if there are enough players, say thousands of twenty-five-year-old players, we can expect to see a handful of (extremely rich) survivors (and a very large cemetery).

What you always miss out on when citing examples of people like Steve Jobs whose success seems so improbable at the individual level is that, with a big enough “cemetery” of people making similar attempts but failing, the probability of having a few people like him increases. Moreover, after the first success there is some preferential attachment, so to speak–while most startups that get funding do not succeed, the vast majority of startups don’t get any funding. Jack Dorsey’s first success increased the odds that even a stupid sounding idea would get funding the next time around, which increased his odds of succeeding. Now, there are a lot of people in a similar situation who did not then go on to have another success, but again, if the cemetery is big enough, you will end up with a few Jack Dorseys.
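Taleb’s roulette arithmetic is easy to check. Assuming one pull of a six-chamber revolver per year from age twenty-five to fifty, as in his example, and an illustrative pool of 10,000 players:

```python
# Checking Taleb’s Russian-roulette arithmetic: one trigger pull per year
# on a six-chamber revolver, from age twenty-five to fifty (25 rounds).
# The pool of 10,000 players is an illustrative assumption.

survive_one_round = 5 / 6
p_survive_25_rounds = survive_one_round ** 25  # roughly a 1% chance

players = 10_000
expected_survivors = players * p_survive_25_rounds
# About a hundred “consistent winners” emerge from pure chance,
# alongside a very large cemetery.
```

A 1% individual survival probability sounds like proof of skill when you meet a survivor; it guarantees a crowd of survivors when enough people play.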

Again, the point is not to argue that everything is pure luck. The point is that the role randomness plays in anything is unknowable. We have stories that persuade us to a greater or lesser extent, but in the end there is enormous uncertainty. Take the very debate over whether we are in a stagnation or a period of accelerating progress. The debate is very robust, with a great deal of evidence brought to bear on both sides of the argument. And everyone can think of alternative stories to fit the data–when I brought up Thiel’s conclusions about Google’s large cash hoard, people immediately came up with alternative interpretations.

In Taleb’s world, progress and ill fortune are not smooth trendlines in either direction; they are lumpy. You get big, sudden breakthroughs, and huge, unexpected catastrophes (think of the turkey). So it can seem for a very long time like we’re going in either direction, and then one dramatic event today can have more of an impact on our well-being than the past thirty years combined. In a way, the relatively short period since the onset of the Industrial Revolution is a big, dramatic event on the timescale of human history, and there is no guarantee that it will last. The progress could stop tomorrow, or the gains could be completely reversed by some countervailing dramatic event–say, nuclear war or a particularly virulent disease. Or, conversely, we could be in the foothills of a positive breakthrough of such a magnitude as to make the past 200 years look like nothing. There is simply no way to say.

F. A. Hayek was also a proponent of radical uncertainty; he believed that the only possible path to progress was through trial and error. It is possible to do the big things that the stagnationists want to see, but you had better be prepared to see some colossal failures along the way. This begins to look more like Stephenson’s story about the role of risk, and there is certainly some overlap here.

But Thiel’s deterministic worldview is well outside of that overlap. Contra Thiel, the economist Frank Knight believed that the world is filled with irreducible and unquantifiable uncertainty. What’s more, Knight believed that progress was made and profit was found by entrepreneurs who deliberately sought out niches that had high degrees of uncertainty.

In this story of uncertainty and lumpy progress, Google’s $50 billion makes a lot of sense. In a direct response to Thiel, Arnold Kling pointed out that under high uncertainty there is a high option value to waiting to invest.

Picture two possible scenarios–one in which Google develops the next big breakthrough in-house, another in which someone else develops it and Google acquires them. Google is clearly pursuing a lot of the former–famously, they are developing wearable computing and they have already clocked hundreds of thousands of miles on their fleet of automated cars. But their tens of billions of dollars in the bank suggests that they believe the big breakthroughs are going to come from outside of Google, rather than through their internal process.

This is frustrating to a hard determinist like Thiel, who thinks we should be able to see what’s coming down the road and simply invest that $50 billion in it. But ultimately this is no different from any other make-or-buy decision that firms face, and how that split is made is a question economists have analyzed since Coase. The fact that Google is sitting on so much money does not, from the perspective of this particular story, imply that they think we’re in the middle of a stagnation. Rather, it implies that they believe the market is more likely to supply the next $10 billion a year breakthrough than their own internal processes. That could speak to the weakness of their internal processes, or it could simply mean that the market is that much better at developing big breakthroughs than a single corporation could ever be.

Alex Tabarrok asked who will make the future if Google is just waiting for it. The answer provided by this story is that many players, in many firms, scattered across the market and across time will make the future, and many will do so in the hopes of a big payday from Google.

Cycles of Control and Resistance

This is the last story that I will examine here, and it comes from my former classmate Eli Dourado.

To really understand Eli’s story, you have to understand his larger framework. Despite the fact that economically savvy libertarians believe very strongly in the power of incentives, most still seem to harbor the notion that the practical path forward for policy reform is through persuasion. And there is a story to be told in which this strategy has seen some success–with the neoliberal revolution, for example.

In Eli’s framework, the incentives against governments adopting libertarian policies in a broad way are simply too powerful to overcome in the long run. Think about the big spam botnets. Botnets build up over time and become a low-cost way to send people spam emails. After a while, one or two botnets will account for the vast majority of all spam. Security groups will get together and work to get one of the top ones taken out, and it will result in a big short-term payoff–a recent takedown resulted in an estimated 50% drop in spam.

But the cost of building up a botnet is low enough, and the payoff for spam with an infinitesimal success rate is high enough, that it doesn’t take long before the volume of spam is right back to where it was before the takedown. In Eli’s world, most good policies are like botnet takedowns–short-term gains, but a wash in the long run.
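The botnet analogy can be sketched as a toy model. The 50% drop matches the takedown figure above; the monthly regrowth rate is an illustrative assumption:

```python
# Toy model of the botnet dynamic: a takedown halves spam volume,
# but cheap regrowth steadily restores it toward the old baseline.
# The 15%-per-month regrowth rate is an illustrative assumption.

def spam_after_takedown(baseline=100.0, drop=0.5, regrowth=0.15, months=12):
    """Monthly spam volume after a takedown that removes `drop` of it."""
    volume = baseline * (1 - drop)
    path = []
    for _ in range(months):
        # Each month, regrow a fixed fraction of the remaining gap
        volume += regrowth * (baseline - volume)
        path.append(volume)
    return path

recovery = spam_after_takedown()
# After a year, most of the short-term gain from the takedown is gone.
```

The analogy to policy reform is that the reform changes the level for a while, but not the underlying incentives that set the long-run equilibrium.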

With that in mind, here’s is Eli’s more specific story about innovation:

First we need to differentiate between two kinds of innovation and think about their effects. The first kind of innovation is geared toward brute maximization of production. It is typically centralized and makes use of economies of scale. Examples might include an assembly line factory or a big, coal-fired power plant. Because these innovations tend to be centralized, they introduce points of control. The capital is typically fixed and therefore easy to tax and regulate. It’s well known in the development literature that it’s really hard for governments to control rural peasants who live off the grid. Once they move to the cities and plug into centralized services, it is easier to require them to send their children to school, for instance. Because these innovations introduce points of control, I will call them technologies of control.

On the other hand, not all innovations are about brute maximization of production. Some are about producing things that we already know how to produce in ways that have ancillary benefits. An important ancillary benefit is evading control. Examples of these innovations include 3D printers and solar power. The evasion of control that is possible with 3D printers is the subject of Cory Doctorow’s short story Printcrime. And portable solar power cells can make people harder to control by supplying electricity without the need to register an address, have a bank account, stay put, and so on. These are obvious examples, but control can be evaded through more subtle innovations as well. I will call the innovations that circumvent these points of control–the points that governments or monopolies can use to exploit, tax, or regulate–technologies of resistance.

Eli explicitly splits the difference between The Great Stagnation and Race Against the Machine. He posits that the Industrial Revolution was all about the technologies of control–people clustered into dense urban populations, and were employed in mass numbers by factories that produced on a scale that was unprecedented in human history. We saw massive improvements in the standard of living of industrializing nations in the blink of an eye.

But all the concentration and the mobility-reducing high capital costs made the sources of our new wealth easy targets for governments to come in and take a bigger and bigger cut. Beyond straight taxation, interest group pressures also created an incentive to exercise specific forms of control through government regulation, reducing the effectiveness of the technologies of control.

Still, the productive capacity of these technologies was such that we coasted all the way into the 1970s before the deadweight of government regulation and taxation slowed us down. Since then, our resources have shifted to developing technologies of resistance, which is why Brynjolfsson and McAfee see accelerating innovation. It is accelerating, but it’s accelerating in a very specific area because of how difficult it is to control that particular area.

We do see welfare gains from innovation in the technologies of resistance, but they are not nearly as big as the gains the technologies of control could deliver if they were not so bogged down with regulation. Resources that, absent efficiency-reducing regulation, would have gone toward maximizing pure economic growth are instead spent on building robustness against control.

In this story, ideology, persuasion, and democracy will not help us. Every time the median voter swings more libertarian, we see the technologies of control begin to give us bigger gains again. But, like the botnet takedowns, it is only a matter of time before the regulations creep back in again. And we almost never see anything comparable to a botnet takedown in terms of orders of magnitude–we see some small reforms that may be bigger or smaller in impact, but we’re talking 1% or 2% improvements, not 50% or 75%.

The only way to move to a better long run path is to change something fundamentally structural. Eli imagines an extreme version of such a change in his post on the utopia of infinite elasticity.

It’s tempting to think that the bond market is powerful because of corruption, but that is at most a proximate source of power. The real source of power is elasticity. The supply of financial capital is highly elastic; it moves around the globe in milliseconds. Try to tax it and the incidence of the tax will go elsewhere; burden it with regulations and it will flee to a more hospitable climate.

Imagine a world in which all factors of production were as mobile and elastic as financial capital. If labor and physical capital could flee instantaneously and at low cost from bad policies, there would be little danger from either the predatory or incompetent state. In short, it would be a libertarian utopia.

As with any ideal, Eli does not believe that such a world is possible to get to, but he does think that we can move closer to it. Maybe, rather than simply developing specific technologies of resistance, we can build a whole infrastructure of resistance. Maybe mass adoption of 3D printing and wireless mesh networks helps move us to a much more elastic world.

Otherwise, we will just be stuck in this race against coercion where we eke out progress in inches rather than big leaps. We may occasionally widen the gap, or set back coercion with the reform movement of the moment, but we’ll never see the enormous gains of the early Industrial Revolution on a regular basis again. In this story, you can take everything that Cato, the Hoover Institution, and even Milton Friedman accomplished, throw it all in the garbage, and you won’t see much of a difference in the long run.

Instead of investing in lobbying, we should be investing in an infrastructure of resistance.

I have to admit that I find this to be the most fascinating story of all.

Groupishness and Video Game Economics

The world of PC video games is currently ruled by Valve, through their digital game store Steam, which boasts some 40 million users. Part of their success can be credited to their practice of providing heavy discounts on games that are a few months or a year old.

Rival company EA claims that this practice helps intermediaries like Steam while hurting the game developers who have invested a lot of resources into making quality products. David DeMartini, head of Origin, EA’s alternative to Steam, claims that such discounts “cheapen the intellectual property.” He then suggests that the system creates perverse incentives:

One criticism some have levelled at Steam is that its heavy discounts damage video game brands because gamers hold off on buying new releases at launch in anticipation of a future sale.

DeMartini agreed with this position: “What Steam does might be teaching the customer, ‘I might not want it in the first month, but if I look at it in four or five months, I’ll get one of those weekend sales and I’ll buy it at that time at 75 per cent off.’

Valve responded that DeMartini’s claim does not match the facts. Business development chief Jason Holtman first points out that, as game developers themselves, they eat their own dogfood.

We do it with our own games. If we thought having a 75 per cent sale on Portal 2 would cheapen Portal 2, we wouldn’t do it. We know there are all kinds of ways customers consume things, get value, come back, build franchises. We think lots of those things strengthen it.

In order to understand why a discount later might not impact sales today, you need only two simple concepts: time preference, and what I’ve called fanboyism and Jonathan Haidt calls “groupishness”.

The Value of the Now

I am continually impressed by the firm grasp of economic theory that public-facing representatives of Valve always seem to have–even before they brought on an actual economist. In this case, Holtman clearly gets time preference.

For instance, if all that were true, nobody would ever pre-purchase a game ever on Steam, ever again. You just wouldn’t. You would in the back of your mind be like, okay, in six months to a year, maybe it’ll be 50 per cent off on a day or a weekend or during one of our seasonal promotions. Probably true. But our pre-orders are bigger than they used to be. Tonnes of people, right? And our day one sales are bigger than they used to be. Our first week, second week, third week, all those are bigger.

When asked to comment on why Steam customers are behaving the opposite of how we would expect them to, given the incentives, Holtman states “the trade-off they’re making is a time trade-off.”

Time preference is the term economists use to describe the phenomenon whereby individuals are willing to pay more for something in the present than they would be at a later date. There are a lot of reasons why something might be more valuable sooner rather than later. There’s always an element of uncertainty–you know they’ll discount any apples the store has left tomorrow, but what if they run out entirely before that? You know Valve will discount a game by a huge amount in a few months, but what if Valve goes out of business before then? What if you lose your hands before then and are unable to play video games ever again?

There are other reasons as well, which are more idiosyncratic. In an era before refrigeration or pasteurization, a bottle of milk worth five dollars today might be worth zero dollars in a week. But it wouldn’t make any sense to wait a week in order to get five dollars off, because it will have spoiled by then.
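To see why a steep future discount ought to matter even for a durable good, here is a minimal expected-surplus sketch. Every number in it (prices, discount rate, availability odds) is a hypothetical illustration, not data about Steam.

```python
# Minimal sketch of the waiting decision under time preference.
# All numbers below are hypothetical illustrations.

def surplus_of_buying_now(price_now, value):
    return value - price_now

def expected_surplus_of_waiting(price_later, value, months,
                                monthly_discount, p_available):
    # Future surplus is shrunk for impatience and weighted by the
    # chance the deal (and the buyer's ability to enjoy it) survives.
    impatience = (1 - monthly_discount) ** months
    return p_available * impatience * (value - price_later)

# A $60 game valued at $80, versus 75% off four months from now:
now = surplus_of_buying_now(60, 80)                        # 20
wait = expected_surplus_of_waiting(15, 80, 4, 0.05, 0.95)  # ~50
```

On these assumed numbers, waiting handily beats buying at launch, which is exactly the behavior DeMartini predicted and which Steam’s growing pre-orders contradict.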

It is not intuitive on the face of it that video games should have steep discount functions. After all, video games do not spoil, and the uncertainties surrounding their future purchase aren’t much different than a lot of goods with less dramatic discount functions. So what’s going on here?

Gamer Tribalism

Following that argument, nobody would ever go to a first run movie ever again. Even now, as DVDs come out even faster, you’d just be like, heck, I’ll just wait and get the DVD and me and 10 friends will watch it. But people still like to go to theatres because they want to see it first, or they want to consume it first. And that’s even more true with games.

In The Righteous Mind, moral psychologist Jonathan Haidt describes how human beings are inherently group-oriented. A lot of things that we like to think we prefer because of some inherent property we actually like because of how it connects us with other people.

For simplicity’s sake, let’s say that a consumer’s valuation of a given good can be split cleanly into two parts–the value they gain from it as an individual, and its prosocial value.

In video games, the individual value would come from most of the obvious things–how fun it is to play, how challenging it is, how good the art is and how well the story is written.

The prosocial value would come from having it as a topic of conversation with all the other people who are currently playing it or only recently finished it. Anyone who bought any of the Harry Potter books near launch day knows what this is like; everyone wanted to get and read the latest book as soon as it came out so that they could immediately turn around and talk to their friends about it.

In video games there is also the added prosocial value of being able to play with other people at parties or online, and being able to connect with new people in the game.

I would argue that, for practical purposes, the individual value of a game essentially never changes. To the extent that it is driven down over time by an increase in substitutes, it decreases much more slowly than the prosocial value does. Much of the prosocial value is created by the fact that everyone expects everyone else to jump at a game when it is brand new; this doesn’t last long, as the group soon moves on to the next new thing.
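One way to make the mechanics concrete is a toy valuation model: total value is an individual component plus a prosocial component that decays after launch. Every number below, including the decay rate, is a hypothetical illustration, not an estimate.

```python
import math

# Toy model: value(t) = individual + prosocial * exp(-decay * t),
# with t in months since launch. All numbers are hypothetical.
def value(individual, prosocial, t, decay=1.0):
    return individual + prosocial * math.exp(-decay * t)

PRICE_LAUNCH, PRICE_SALE, SALE_MONTH = 60, 15, 4  # 75% off at month four

# A buyer whose value is mostly prosocial does best at launch...
fan_now = value(20, 60, 0) - PRICE_LAUNCH               # 20
fan_wait = value(20, 60, SALE_MONTH) - PRICE_SALE       # ~6

# ...while a buyer with little prosocial value only enters on sale.
solo_now = value(20, 5, 0) - PRICE_LAUNCH               # -35
solo_wait = value(20, 5, SALE_MONTH) - PRICE_SALE       # ~5
```

On this sketch, deep sales don’t cannibalize launch buyers; they add buyers who would otherwise never have purchased at all.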

So how much of the value that most consumers get from a game is prosocial, and how much is for the inherent joy of playing the video game itself?

Well, if Valve is to be believed, then prosocial value makes up as much as 50 or 75 percent of consumers’ valuation of most games. That is an enormous fraction, and I have to wonder how representative it is of consumer valuation more broadly.

Holtman does seem to indicate that at least some of the value is individual:

Now you can do things like say, I never did own XCOM. Maybe I should buy that for $2 or $5 and pick it up. Or I didn’t get that triple-A game from three years ago, maybe I’ll pick that up on a promotion. And that’s making people happier.

But even here there’s a prosocial element–he states that the ability to get something late for cheap is actually “making them more willing to even buy the first time release.” In other words, if you didn’t get in on Portal 1 when it came out, but had a bunch of friends who did, you can “catch up” now for cheap, and then when Portal 2 comes along you’re more likely to pay the premium to be part of the group.

A lot of people in behavioral economics and moral psychology take their findings to be at odds with standard economic models. But I have always seen them as complementary; as giving us a much better idea of how subjective values are arrived at in the real world. I also share Yanis Varoufakis’ optimism that digital systems like Steam will provide even more insight into human nature than traditional social science experiments or data mining ever could.

In short, it’s a very exciting time to be interested in social science. Also, an exciting time to be a gamer!

Two Arguments in Defense of Unpaid Internships

I’ve heard a lot of arguments about how unpaid internships are evil or a form of exploitation. Recently I heard some of those arguments brought up again, and I decided it was time to stand up for this much maligned position.

An Argument from Principles

Let me lay out a few scenarios for you:

  1. Five people start their own individual blogs, writing several posts a day and making them freely available online.
  2. Those five people decide they want to ditch their individual blogs and all blog together in a group blog.
  3. A small paper offers to have the five of them blog under their banner, but does not pay them to do it.
  4. Instead of 3, the five people ditch their blog and get unpaid internships at the small paper.

I would venture that most people who think there is something wrong with unpaid internships don’t think that there’s anything wrong with 1-3 above. What I would like is for those who believe there is something wrong with number 4 to explain to me what distinguishes it, morally, from 1-3. Because I can’t see it.

I suppose they could argue that there is some distinction between number 4 and the unpaid internships they don’t like. I welcome people to explain to me if that is the case.

So my first argument is simple: if there’s nothing wrong with 1-3, and there’s no moral difference between them and 4, then, in principle, there is nothing wrong with unpaid internships.

An Argument from Consequences

You can’t really talk about consequences without making a bunch of assumptions about what good consequences are. So I’m going to tread carefully here, but I think the assumptions I’m going to make are pretty reasonable and widely shared.

Of course you can’t even talk about consequences without some idea of how the world works. Rather than pretending to know more than I do, let me lay out a few more possible scenarios for you:

  1. If the small paper is forced by law to pay their interns, it won’t keep any of them; our five individuals will not be associated with the paper at all.
  2. If the small paper is forced by law to pay their interns, it will pay one of them a paltry amount and get rid of the other four.
  3. If the small paper is forced by law to pay their interns, it will pay three of them a paltry amount and not the other two.
  4. If the small paper is forced by law to pay their interns, it will pay all of them.

I think some of the people who are against unpaid internships think that the world works in such a way to make number 4 possible. I, on the other hand, tend to believe that the world looks more like 1 or 2 than 4.

This belief of mine is subject to debate, of course. For now, all I’ll say is that in the news business in particular, margins are so low that I have to think we’re much closer to 1 than we are to 2, for that particular industry at the very least.

People take unpaid internships because they gain something from them; whether it’s experience, exposure, or the fact that it’s more prestigious to have a professional publication on your resume than a personal blog. Taking the unpaid internship makes them better off, at least in the long run. Taking a paid internship obviously makes them even better off, but they won’t always have the luxury of that choice.

Readers are also better off when they have more pieces to read that they enjoy. If fewer unpaid interns at the paper means fewer enjoyable pieces for the readers, then getting rid of unpaid interns makes them worse off.

So my second argument, while slightly longer in buildup than the first, is still quite simple: if we live in a world that looks like number 1 or 2 above, and arguably even 3, then getting rid of unpaid internships makes the potential interns as well as potential beneficiaries (in this scenario, readers) worse off.

EDIT: Patrick Delaney came back with a scenario that definitely merits discussion:

https://twitter.com/pxdelaney/status/218856415765344256

And discuss it we did–you can see the whole conversation here.

Fragility and Feedback

We have been fragilizing the economy, our health, political life, education, almost everything… by suppressing randomness and volatility. Just as spending a month in bed (preferably with an unabridged version of War and Peace and access to The Sopranos’ entire eighty-six episodes) leads to muscle atrophy, complex systems are weakened, even killed when deprived of stressors.

-Nassim Taleb, Antifragile (draft of prologue)

Such protectionist policies enforce stability at the cost of stifling both resilience and progress. They eliminate the checking process essential to trial-and-error learning, the way by which we identify the “failures” that new forms might correct.

-Virginia Postrel, The Future and Its Enemies

Google’s server architecture is very robust against failures. The quality of the company’s products, and their bottom line, depend on their ability to process enormous amounts of data without interruption and with a low risk of losing any of it. The danger is not hypothetical–companies have been wiped out because some freak accident they were unprepared for destroyed a large fraction of the data they relied on.

Steven Levy’s book on Google makes it clear that they were forced to become robust by their circumstances. Most companies at the time would pay for expensive, high-end servers that had a very low rate of failure. Google did the opposite–they went for inexpensive servers with an extremely high rate of failure. In order to survive, they had to create software for their servers that would preserve their data and keep their workflow from being interrupted even as servers failed left and right.

Google owes their resilient infrastructure to the fragility of their early servers.

In an active quest for resilient infrastructure, Netflix imposed disorder by design upon their servers.

Imagine getting a flat tire. Even if you have a spare tire in your trunk, do you know if it is inflated? Do you have the tools to change it? And, most importantly, do you remember how to do it right? One way to make sure you can deal with a flat tire on the freeway, in the rain, in the middle of the night is to poke a hole in your tire once a week in your driveway on a Sunday afternoon and go through the drill of replacing it. This is expensive and time-consuming in the real world, but can be (almost) free and automated in the cloud.

This was our philosophy when we built Chaos Monkey, a tool that randomly disables our production instances to make sure we can survive this common type of failure without any customer impact. The name comes from the idea of unleashing a wild monkey with a weapon in your data center (or cloud region) to randomly shoot down instances and chew through cables — all the while we continue serving our customers without interruption. By running Chaos Monkey in the middle of a business day, in a carefully monitored environment with engineers standing by to address any problems, we can still learn the lessons about the weaknesses of our system, and build automatic recovery mechanisms to deal with them. So next time an instance fails at 3 am on a Sunday, we won’t even notice.

Netflix understands that failure is feedback. Until something goes wrong, they won’t be able to figure out what problems exist in their ability to cope with failure. So rather than resting on their laurels, they put themselves through a constant trial by fire to force themselves to be ready and to improve their system. It is no different than getting small doses of a disease or poison in order to build immunity, or working your body out above and beyond the demands your life makes on it in order to increase its fitness. There are many things in human life where stressors are a prerequisite for improvement–or simple maintenance.
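The feedback loop Netflix describes can be caricatured in a few lines. This is a sketch of the idea, not their actual tooling; the class and names below are invented for illustration.

```python
import random

# Toy model of a redundant service: requests succeed as long as at
# least one replica is healthy. Hypothetical sketch of chaos testing.
class Service:
    def __init__(self, replicas=3):
        self.healthy = [True] * replicas

    def handle_request(self):
        # Any healthy replica can serve the request.
        return any(self.healthy)

    def recover(self):
        # Automated recovery: restart every failed replica.
        self.healthy = [True] * len(self.healthy)

def chaos_monkey(service, rng):
    # Randomly disable one replica, as Chaos Monkey does to instances.
    victim = rng.randrange(len(service.healthy))
    service.healthy[victim] = False

rng = random.Random(42)
service = Service(replicas=3)
for _ in range(100):                  # 100 rounds of deliberate failure
    chaos_monkey(service, rng)
    assert service.handle_request()   # customers never notice
    service.recover()
```

The point of the drill is the assertion inside the loop: if redundancy or recovery is broken, the deliberate failure surfaces it during business hours, with engineers watching, instead of at 3 am on a Sunday.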

Yet stressors are precisely what we seek to hide from in the world of policy. It is my contention that we are too terrified of short term risk and volatility in this country. Rather than embracing Chaos Monkeys of our own, we simply keep a spare in the back of the car and assume everything will go well if we ever have a flat. The only way to grow stronger, wealthier, and more resilient in the long run is to expose ourselves to a lot more risk and volatility than we have lately shown a willingness to cope with.

Deafening Ourselves

It’s not my purpose to single out the environmental movement, but it does embody a certain mentality about risk that has become so tied up in intellectual knots that it has the net long term effect of making things more risky. It is my thesis that a small number of people have to be willing to shoulder greater risks in order to create changes that eventually reduce risk for civilization as a whole.

Solve for X: Neal Stephenson on getting big stuff done

Stephenson’s point about risk is part of his larger argument that innovation in this country has stagnated, a view he shares with Tyler Cowen and Peter Thiel, among others. Putting his general conclusion to the side, I think the importance he places on at least some subset of the population needing to shoulder more short term risk to reduce overall long term risk is absolutely true.

Instead, we take measures to “manage risk”, deafening ourselves to feedback in the process.

For example, there are risks associated with allowing people to build what they want on the property that they own. They could introduce something that disrupts the neighborhood, either by taking up all the parking, or making noise, or both. So we have zoning laws, building permits, and various business licenses. As a result, real estate supply cannot respond to the massive demand for city living, and prices skyrocket.

Moreover, fewer business experiments are possible when everything has to fit a cookie-cutter business license. In Fairfax County, Virginia, a small theater had to wait nearly a year to open because the county had never had a theater before and wasn’t sure how to license one. That’s an enormous opportunity cost to impose on an operation of that size.

The political process through which license or zoning categories can be changed, and permits are issued, is extremely slow to respond to changes on the ground. While a more open system would hear the demand for denser development as loud as a scream, we’re so busy protecting ourselves from short term disruptions that we have essentially left ourselves deaf to it, and to all the potential beneficial innovations that could have happened.

This is no academic point; the toll of this aversion can be measured in wealth as well as lives. Nothing is more emblematic of our attitudes towards risk than the 12-year, multimillion-dollar process that new drugs must go through before the FDA allows them to go to market. This lag has led to countless unnecessary deaths (PDF), not to mention making new drugs enormously more expensive once they finally do reach the market. And the ability of FDA trials to truly keep us safe is questionable–the trial samples are not really random, and a side effect that seems small in a sample of thousands might nevertheless affect a huge number of people once the drug hits a market of millions.

The bottom line is that there are things that cannot really be known until you take the drug to market. Doctors should still have to do their due diligence and inform patients of the risks and unknowns, but delaying entry by over a decade and piling on enormous costs accomplishes very little. Unless your goal is to drastically reduce the number of new treatments we are capable of discovering per year.

We put off the short term risks and increase our long run costs.

Ditch Stability

The economy, politics, and job market of the future will host many unexpected shocks. In this sense, the world of tomorrow will be more like the Silicon Valley of today: constant change and chaos. So does that mean you should try to avoid those shocks by going into low-volatility careers like health care or teaching? Not necessarily. The way to intelligently manage risk is to make yourself resilient to these shocks by pursuing those opportunities with some volatility baked in. Taleb argues—furthering an argument popularized by ecologists who study resilience—that the less volatile the environment, the more destructive a black swan will be when it comes. Nonvolatile environments give only an illusion of stability.

-Reid Hoffman and Ben Casnocha, The Start-up of You

We need more risk and volatility, and we need to give up our fruitless quest to hide from them.

In many ways this quest reflects a lack of historical perspective. We bail out the US automakers again and again because they were once the symbol of American greatness, and we think that once they are gone we will never shine again. Yet we forget that at the turn of the 20th century, 41 percent of our labor force was employed in agriculture, and by the end of the century that figure was down to less than 2 percent. We have undergone massive sectoral shifts before. There is no guarantee that it will go as well this time, but there’s also no reason to think that it won’t.

We restrict immigration and imports because they pose an immediate risk to specific workers and businesses in the short run. Yet we forget that during periods of far more open immigration and trade, we experienced historically unprecedented levels of growth. Moreover, opening these channels opens us to feedback–from the ideas, new business models, the scientific and technological breakthroughs occurring worldwide and that might occur here if we would allow people to come here.

We should not be focusing our efforts on fighting risk and volatility, but on fighting fragility. We should fight for feedback.

It is only in the face of volatility that we are able to innovate and grow resilient.