Stories of Progress and Stagnation

I grew up around computers and have always taken it for granted that we lived in a time of enormous innovation and growth. Within my lifetime, my family has gone from something that looks like this:

To all individually having iPhones, which are enormously more powerful machines, connected to the Internet, and robust platforms for a huge variety of independently developed software. Never mind our various laptops and desktop computers!

It seems to me that from around the point the term “web 2.0” was coined to the market crash of 2008, the story about the state of things that most people accepted was the one I was inclined to accept by default: that we lived in an era of accelerating progress, in which every year would see huge leaps over the previous one, the year after that would see a leap of similar relative magnitude, and this would go on indefinitely.

There have always been stagnationists, but it’s only in the last couple of years that stagnation stories have started to become fashionable again. Tyler Cowen deserves no small amount of credit, as The Great Stagnation made an enormous splash when it came out in January of last year. While discussions of the recession up until then had consisted almost entirely of diagnosing the financial bubble, post-TGS discussions had to face the possibility that our present predicament might be part of larger, more structural trends. Regardless of whether the book changed anyone’s mind directly, there can be little doubt that it played a huge role in setting the agenda.

The debate that has emerged has fascinated me, both as someone who is deeply interested in our propensity to tell stories, and simply because it is extremely hard to determine who is correct.

The Death of Ambition and the Modern Game of Inches

Tyler Cowen credits PayPal founder and venture capitalist Peter Thiel with inspiring the story behind The Great Stagnation. Recently, Thiel debated Google Chairman Eric Schmidt on the subject of technology and progress. One section of that debate that made the rounds in the economics blogosphere concerned Google’s $50 billion in the bank.

Thiel argued that “if we’re living in an accelerating technological world”, Google should be able to invest that $50 billion in technology in a way that returns their investment many times over. Even if Googlers are claiming that we live in an era of progress, their actions speak to a more pessimistic assessment.

Thiel believes that we live in a deterministic world in which progress is made by making big bets on enormous projects. Part of the reason we no longer pursue some ambitions is that we have all become indeterminists; our resources are all tied up in hedging against uncertainty. Even though the tech sector is characterized by progress so stable and relentless that we refer to several specific trends as “laws”, the players are, if anything, more indeterminist in their worldview than average.

Google’s low-yielding $50 billion is the ultimate symbol of this. Google made nearly $10 billion in profits in 2011, and almost all of that came from search, their core product. Thiel’s argument is that if Google believed that we lived in a time of accelerating technological progress, where $10 billion a year breakthroughs were just lying around waiting to be invented, they would be spending every penny they had on attempting to make those breakthroughs happen.

More important than the cultural change, however, is the fact that public policy has systematically outlawed ambitious projects of any sort. From the debate with Schmidt:

The why questions always get immediately ideological. I’m Libertarian, I think it’s because the government has outlawed technology. We’re not allowed to develop new drugs with the FDA charging $1.3 billion per new drug. You’re not allowed to fly supersonic jets, because they’re too noisy. You’re not allowed to build nuclear power plants, say nothing of fusion, or thorium, or any of these other new technologies that might really work.

So, I think we’ve basically outlawed everything having to do with the world of stuff, and the only thing you’re allowed to do is in the world of bits. And that’s why we’ve had a lot of progress in computers and finance. Those were the two areas where there was enormous innovation in the last 40 years. It looks like finance is in the process of getting outlawed. So, the only thing left at this point will be computers and if you’re a computer that’s good. And that’s the perspective Google takes.

Further down, responding to criticism of the financial sector, he adds:

I disagree with the premise behind the question that there’s some sort of tradeoff between finance and other areas of innovation. I think it’s easy to be anti-finance at this point in our society, and I think the reality is we have an economy that got very lopsided towards finance, but it’s fundamentally because people weren’t able to do other things.

So, if you ask why did all the rocket scientists go to work on Wall Street in the ’90s to create new financial products, and you say well they were paid too much in finance and we have to beat up on the finance industry, that seems like that’s the wrong side to focus on. I think the answer was, no, they couldn’t get jobs as rocket scientists anymore because you weren’t able to build rockets, or supersonic airplanes, or anything like that. And so you have to ‑‑ it’s like why did brilliant people in the Soviet Union become grand master chess players? It’s not that there’s something deeply wrong with chess, it’s they weren’t allowed to do anything else.

In short, we have grown risk averse in both our culture and in our policy.

Science fiction writer Neal Stephenson is firmly in the stagnationist camp, and he definitely believes it is all about risk aversion. He has written:

 Innovation can’t happen without accepting the risk that it might fail. The vast and radical innovations of the mid-20th century took place in a world that, in retrospect, looks insanely dangerous and unstable. Possible outcomes that the modern mind identifies as serious risks might not have been taken seriously — supposing they were noticed at all — by people habituated to the Depression, the World Wars, and the Cold War, in times when seat belts, antibiotics, and many vaccines did not exist.

In Stephenson and Thiel’s story, true innovation is risky, bold, and visible, while what passes for innovation in modern times is peanuts by comparison. Stephenson pointed to the ongoing competition to build the world’s tallest building as an emblematic example of the problem. These days the tallest building in the world is only a few inches taller than the previous record-holder, and only holds the record for a few months as another slightly taller building is always being constructed in near parallel.

What Stephenson wants is for us to build a structure several orders of magnitude larger than anything that’s ever been built before; a structure that will hold the record for decades before it becomes technologically possible or financially conceivable to surpass it. To Stephenson as well as Thiel, that is what innovation should look like.

The stagnationist has no problem with the ground game, but is frustrated that there doesn’t seem to have been any passing game in forty years. Meanwhile, everyone is going around presenting incremental gains as though they were big breakthroughs. Neither Stephenson nor Thiel, nor indeed Cowen, is impressed. You can talk about all the wonders we’ve seen since the mass adoption of the Internet, but have they really moved the needle? Just think of penicillin, anesthetics, the automobile, and the airplane, not to mention all the spillover innovations that came from putting a man on the moon!

At Founder’s Fund, the venture capital firm at which Thiel is a partner, they have a saying: “we wanted flying cars, and instead we got 140 characters.”

The Value of the Unseen

There is only one difference between a bad economist and a good one: the bad economist confines himself to the visible effect; the good economist takes into account both the effect that can be seen and those effects that must be foreseen.

-Frederic Bastiat, What Is Seen and What Is Not Seen

I see a lot of truth in pieces of the arguments made by Thiel, Stephenson, and Cowen, but am uncertain whether I buy into all of it. My natural inclination has always been to dismiss stagnationist stories, and Stephenson’s fixation on big, visible things made me all the more skeptical. The stories I have grown close to over the years frequently point out how what seems to be plain truth is often, when you take a step back, a lot less clear and sometimes completely wrong. You think, for instance, that making something as simple as a pencil is the easiest possible task, but it turns out that there’s a huge process behind it, in which no single individual possesses enough knowledge to make a pencil on their own.

Take GDP as an example. It’s a nice point of reference, but if you start assuming that GDP–or even GDP per capita–is synonymous with national wealth, you run into some serious problems. GDP is essentially just aggregate spending. When you buy an iPhone for $199.99, you are adding $199.99 to this year’s GDP. It’s a great proxy for national income but it has many recognized problems. In what is perhaps a dated and vaguely sexist sounding example, Paul Samuelson came up with the following scenario:

Take Samuelson’s example of the man marrying his maid. Samuelson’s point is that the new bride continues doing the housework without being paid. But that would not mean that the work suddenly had no market value. So, in this case, GDP actually understates the market value of all final goods and services because this particular service is no longer exchanged on the market.

The valued activity–the housework–is still being done, but because there isn’t any spending involved, it isn’t measured in GDP.

Bryan Caplan has pointed out repeatedly that the consumption done on digital devices and on the Internet is hugely mismeasured by metrics like GDP. In one post, he points out one implication of all the various network products seeing success in the market today:

In the real world, network goods visibly improve all the time. But suppose they didn’t. Suppose the Facebook of today used the same source code as it did five years ago, but still attracted new users at the same rate as it did in the real world. Many economists would be tempted to call this “stagnation,” but they’d be wrong. Even if Facebook’s source code stayed the same, the mere fact that more people are using the product causes it to be better. Why? Because the point of the product is to amusingly interact with your friends. The more friends who use it, the more amusing it is.

The upshot: Economists (and people generally) underestimate true economic growth for all expanding network products. When you measure the quality of network products, you can’t simply look at them in isolation. You have to measure what you can do with them.

There are many dimensions in which Caplan argues that our measurement biases are worse than ever, but our standard of living is actually better than ever.
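Caplan’s network-goods point can be sketched with a toy model. Every number and functional form here is my own illustrative assumption, not Caplan’s: if each user’s value is proportional to the friends they can reach on the network, then aggregate value grows roughly with the square of the user count, even when the product itself never changes.

```python
# Toy model of a network good whose "source code" never changes:
# each user's value grows with the number of friends also on the network.
# The linear value-per-friend and the friends fraction are illustrative assumptions.

def user_value(friends_on_network: float, value_per_friend: float = 1.0) -> float:
    """Value one user gets, assumed proportional to reachable friends."""
    return value_per_friend * friends_on_network

def total_value(users: int, friends_fraction: float = 0.01) -> float:
    """Aggregate value across all users of an unchanging product."""
    avg_friends = friends_fraction * users  # each user knows 1% of the user base
    return users * user_value(avg_friends)

for users in (1_000_000, 10_000_000, 100_000_000):
    print(f"{users:>11,} users -> total value {total_value(users):,.0f}")
```

Under these assumptions a tenfold increase in users yields a hundredfold increase in total value, which is exactly the kind of unmeasured improvement Caplan has in mind.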

Looking at my own daily life, a huge amount of my consumption is simply not counted in GDP. I consume an enormous amount of content without paying anything for it. There’s also the reverse benefit–I can write lengthy posts like this one and put them in a public place, whereas before the Internet only the lucky few who managed to get published could do anything roughly equivalent.

If we are a groupish species, and I believe we are, then the ability to connect with others and increase the number of our shared experiences is a huge benefit. Clay Shirky’s excellent book, Here Comes Everybody, discusses how modern technology has reduced the transaction costs associated with group action, the benefits of which we are only beginning to understand. In his followup, Cognitive Surplus, he described how central hubs like Wikipedia are able to aggregate a few minutes of effort from enough sources to result in one enormously valuable resource.

Even after The Great Stagnation, many defend the story that progress is accelerating. In Race Against the Machine, Erik Brynjolfsson and Andrew McAfee argue that technological innovation has been going at a breakneck pace for decades, and we’re only now entering the second half of the chessboard. Yet their vision of progress has a caveat–we are currently at a moment where technology is replacing humans in performing certain tasks faster than entrepreneurs are coming up with new jobs that humans are better at than machines. Arnold Kling said it best:

 The paradox is this. A job seeker is looking for a well-defined job. But the trend seems to be that if a job can be defined, it can be automated or outsourced.

Still, overall well-being is going way up as machines become much, much more efficient at providing us with things that we value for rock bottom prices. So on net, we’re seeing tremendous progress.

Radical Uncertainty

Consider a turkey that is fed every day. Every single feeding will firm up the bird’s belief that it is the general rule of life to be fed every day by friendly members of the human race “looking out for its best interests,” as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.

-Nassim Nicholas Taleb, The Black Swan

If our culture has embraced indeterminacy, or more accurately uncertainty, as Thiel thinks we have, then Taleb has taken this story farther than anyone. Whereas Thiel will argue:

 Several people have successfully started multiple companies that became worth more than a billion dollars. Steve Jobs did NeXT, Pixar, and arguably both the original Apple Computer as well as the modern Apple. Jack Dorsey founded Twitter and Square. Elon Musk did PayPal, Tesla, SpaceX, and SolarCity. The counter-narrative is that these are all just instances of one big success; the apparently distinct successes are all just linked together. But it seems very odd to argue that Jobs, Dorsey, or Musk just got lucky.

Taleb has no compunction about arguing that they got lucky–or, at the very least, that we are incapable of telling the difference between pure luck and its opposite. In Fooled By Randomness, he conjures up a scenario in which an eccentric rich person will pay $10 million to whoever wins a game of Russian roulette. Someone might get lucky and win, but if they keep playing, the odds will eventually catch up with them. If the pool of players is large enough, however, you will get a handful of consistent winners even after many rounds of the game.

In addition, in time, if the roulette-betting fool keeps playing the game, the bad histories will tend to catch up with him. Thus, if a twenty-five-year-old played Russian roulette, say, once a year, there would be a very slim chance of his surviving until his fiftieth birthday–but, if there are enough players, say thousands of twenty-five-year-old players, we can expect to see a handful of (extremely rich) survivors (and a very large cemetery).

What you always miss out on when citing examples of people like Steve Jobs whose success seems so improbable at the individual level is that, with a big enough “cemetery” of people making similar attempts but failing, the probability of having a few people like him increases. Moreover, after the first success there is some preferential attachment, so to speak–while most startups that get funding do not succeed, the vast majority of startups don’t get any funding. Jack Dorsey’s first success increased the odds that even a stupid sounding idea would get funding the next time around, which increased his odds of succeeding. Now, there are a lot of people in a similar situation who did not then go on to have another success, but again, if the cemetery is big enough, you will end up with a few Jack Dorseys.
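Taleb’s cemetery argument is just binomial arithmetic, and a quick simulation makes it concrete. The player count is my own illustrative number; the rounds and odds follow his example of annual play from age twenty-five to fifty with a six-chamber revolver:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

PLAYERS = 10_000       # size of the "cemetery" pool (illustrative)
ROUNDS = 25            # one game per year, from age 25 to 50
SURVIVE_ONE = 5 / 6    # six chambers, one bullet

# Exact expectation: each player survives all 25 rounds with probability (5/6)**25
expected = PLAYERS * SURVIVE_ONE ** ROUNDS

# Monte Carlo: a player "wins" only by surviving every round
survivors = sum(
    all(random.random() < SURVIVE_ONE for _ in range(ROUNDS))
    for _ in range(PLAYERS)
)

print(f"expected ~{expected:.0f} survivors out of {PLAYERS:,}; simulated {survivors}")
```

Each individual survivor looks wildly improbable (about a 1% chance), yet a pool of ten thousand players reliably produces a hundred or so of them, alongside a very large cemetery.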

Again, the point is not to argue that everything is pure luck. The point is that the role randomness plays in anything is unknowable. We have stories that persuade us to a greater or lesser extent, but in the end there is enormous uncertainty. Take the very debate over whether we are in a stagnation or a period of accelerating progress. The debate is very robust, with a great deal of evidence brought to bear on both sides of the argument. And everyone can think of alternative stories to fit the data–when I brought up Thiel’s conclusions about Google’s large cash hoard, people immediately came up with alternative interpretations.

In Taleb’s world, progress and ill fortune are not smooth trendlines in either direction; they are lumpy. You get big, sudden breakthroughs, and huge, unexpected catastrophes (think of the turkey). So it can seem for a very long time like we’re headed in one direction, and then a single dramatic event can have more of an impact on our well-being than the past thirty years combined. In a way, the relatively short period since the onset of the Industrial Revolution is a big, dramatic event on the timescale of human history, and there is no guarantee that it will last. The progress could stop tomorrow, or the gains could be completely reversed by some countervailing dramatic event–say, nuclear war or a particularly virulent disease. Or, conversely, we could be at the foothill of a positive breakthrough of such magnitude as to make the past 200 years look like nothing. There is simply no way to say.

F. A. Hayek was also a proponent of radical uncertainty; he believed that the only possible path to progress was through rote trial and error. It is possible to do the big things that the stagnationists want to see, but you’d better be prepared to see some colossal failures along the way. This begins to look more like Stephenson’s story about the role of risk, and there is certainly some overlap here.

But Thiel’s deterministic worldview is well outside of that overlap. Contra Thiel, the economist Frank Knight believed that the world is filled with irreducible and unquantifiable uncertainty. What’s more, Knight believed that progress was made and profit was found by entrepreneurs who deliberately sought out niches that had high degrees of uncertainty.

In this story of uncertainty and lumpy progress, Google’s $50 billion makes a lot of sense. In a direct response to Thiel, Arnold Kling pointed out that under high uncertainty there is a high option value to waiting to invest.

Picture two possible scenarios–one in which Google develops the next big breakthrough in-house, another in which someone else develops it and Google acquires them. Google is clearly pursuing a lot of the former–famously, they are developing wearable computing and they have already clocked hundreds of thousands of miles on their fleet of automated cars. But their tens of billions of dollars in the bank suggests that they believe the big breakthroughs are going to come from outside of Google, rather than through their internal process.

This is frustrating to a hard determinist like Thiel, who thinks we should be able to see what’s coming down the road and simply invest that $50 billion in it. But ultimately this is no different from any other make-or-buy decision that firms face, and how that split is made is a question economists have analyzed since Coase. The fact that Google is sitting on so much money does not, from the perspective of this particular story, imply that they think we’re in the middle of a stagnation. Rather, it implies that they believe the market is more likely to supply the next $10 billion a year breakthrough than their own internal processes are. That could speak to the weakness of their internal processes, or it could simply mean that the market is that much better at developing big breakthroughs than any single corporation could ever be.

Alex Tabarrok asked who will make the future if Google is just waiting for it. The answer provided by this story is that many players, in many firms, scattered across the market and across time will make the future, and many will do so in the hopes of a big payday from Google.

Cycles of Control and Resistance

This is the last story that I will examine here, and it comes from my former classmate Eli Dourado.

To really understand Eli’s story, you have to understand his larger framework. Despite the fact that economically savvy libertarians believe very strongly in the power of incentives, most still seem to harbor the notion that the practical path forward for policy reform is through persuasion. And there is a story to be told in which this strategy has seen some success–the neoliberal revolution, for example.

In Eli’s framework, the incentives against governments adopting libertarian policies in a broad way are simply too powerful to overcome in the long run. Think about the big spam botnets. Botnets build up over time and become a low cost way to send people spam emails. After a while, one or two botnets will account for the vast majority of all spam. Security groups will get together and work to get one of the top ones taken out, and it will result in a big short term payoff–a recent takedown resulted in an estimated 50% drop in spam.

But the cost of building up a botnet is low enough, and the payoff for spam with an infinitesimal success rate is so high, that it doesn’t take long before the volume of spam is right back to where it was before the takedown. In Eli’s world, most good policies are like botnet takedowns–short term gains but a wash in the long run.

With that in mind, here is Eli’s more specific story about innovation:

First we need to differentiate between two kinds of innovation and think about their effects. The first kind of innovation is geared toward brute maximization of production. It is typically centralized and makes use of economies of scale. Examples might include an assembly line factory or a big, coal-fired power plant. Because these innovations tend to be centralized, they introduce points of control. The capital is typically fixed and therefore easy to tax and regulate. It’s well known in the development literature that it’s really hard for governments to control rural peasants who live off the grid. Once they move to the cities and plug into centralized services, it is easier to require them to send their children to school, for instance. Because these innovations introduce points of control, I will call them technologies of control.

On the other hand, not all innovations are about brute maximization of production. Some are about producing things that we already know how to produce in ways that have ancillary benefits. An important ancillary benefit is evading control. Examples of these innovations include 3D printers and solar power. The evasion of control that is possible with 3D printers is the subject of Cory Doctorow’s short story Printcrime. And portable solar power cells can make people harder to control by supplying electricity without the need to register an address, have a bank account, stay put, and so on. These are obvious examples, but control can be evaded through more subtle innovations as well. I will call innovations that circumvent points of control that can be used by governments or monopolies to exploit, tax, or regulate technologies of resistance.

Eli explicitly splits the difference between The Great Stagnation and Race Against the Machine. He posits that the Industrial Revolution was all about the technologies of control–people clustered into dense urban populations, and were employed in mass numbers by factories that produced on a scale that was unprecedented in human history. We saw massive improvements in the standard of living of industrializing nations in the blink of an eye.

But all the concentration and the mobility-reducing high capital costs made the sources of our new wealth easy targets for governments to come in and take a bigger and bigger cut. Beyond straight taxation, interest group pressures also created an incentive to exercise specific forms of control through government regulation, reducing the effectiveness of the technologies of control.

Still, the productive capacity of these technologies was such that we coasted all the way into the 1970s before the deadweight of government regulation and taxation slowed us down. Since then, our resources have shifted to developing technologies of resistance, which is why Brynjolfsson and McAfee see accelerating innovation. It is accelerating, but it’s accelerating in a very specific area because of how difficult it is to control that particular area.

We do see welfare gains from innovation in the technologies of resistance, but they are not nearly as big as we could get with the technologies of control, were they not so bogged down with regulation. Resources are spent on creating robustness against control that would have otherwise been spent on maximizing pure economic growth, in the absence of efficiency-reducing regulation.

In this story, ideology, persuasion, and democracy will not help us. Every time the median voter swings more libertarian, we see the technologies of control begin to give us bigger gains again. But, like the botnet takedowns, it is only a matter of time before the regulations creep back in again. And we almost never see anything comparable to a botnet takedown in terms of orders of magnitudes–we see some small reforms that may be bigger or smaller in impact, but we’re talking 1% or 2% improvements, not 50% or 75%.

The only way to move to a better long run path is to change something fundamentally structural. Eli imagines an extreme version of such a change in his post on the utopia of infinite elasticity.

It’s tempting to think that the bond market is powerful because of corruption, but that is at most a proximate source of power. The real source of power is elasticity. The supply of financial capital is highly elastic; it moves around the globe in milliseconds. Try to tax it and the incidence of the tax will go elsewhere; burden it with regulations and it will flee to a more hospitable climate.

Imagine a world in which all factors of production were as mobile and elastic as financial capital. If labor and physical capital could flee instantaneously and at low cost from bad policies, there would be little danger from either the predatory or incompetent state. In short, it would be a libertarian utopia.

As with any ideal, Eli does not believe that such a world is possible to get to, but he does think that we can move closer to it. Maybe, rather than simply developing specific technologies of resistance, we can build a whole infrastructure of resistance. Maybe mass adoption of 3D printing and wireless mesh networks helps move us to a much more elastic world.

Otherwise, we will just be stuck in this race against coercion, where we eke out progress in inches rather than big leaps. We may occasionally widen the gap, or set back coercion with the reform movement of the moment, but we’ll never see the enormous gains of the early Industrial Revolution on a regular basis again. In this story, you can take everything that Cato, the Hoover Institution, and even Milton Friedman accomplished, throw it in the garbage, and you won’t see much of a difference in the long run.

Instead of investing in lobbying, we should be investing in an infrastructure of resistance.

I have to admit that I find this to be the most fascinating story of all.

Groupishness and Video Game Economics

The world of PC video games is currently ruled by Valve, through their digital game store Steam, which boasts some 40 million users. Part of their success can be credited to their practice of providing heavy discounts on games that are a few months or a year old.

Rival company EA claims that this practice helps intermediaries like Steam while hurting the game developers who have invested a lot of resources into making quality products. David DeMartini, head of Origin, EA’s alternative to Steam, claims that such discounts “cheapen the intellectual property.” He then suggests that the system creates perverse incentives:

One criticism some have levelled at Steam is that its heavy discounts damage video game brands, because gamers hold off on buying new releases at launch in anticipation of a future sale.

DeMartini agreed with this position: “What Steam does might be teaching the customer, ‘I might not want it in the first month, but if I look at it in four or five months, I’ll get one of those weekend sales and I’ll buy it at that time at 75 per cent off.’

Valve responded that DeMartini’s claim does not match the facts. Business development chief Jason Holtman first points out that, as game developers themselves, they eat their own dogfood.

We do it with our own games. If we thought having a 75 per cent sale on Portal 2 would cheapen Portal 2, we wouldn’t do it. We know there are all kinds of ways customers consume things, get value, come back, build franchises. We think lots of those things strengthen it.

In order to understand why a discount later might not impact sales today, you need only two simple concepts: time preference, and what I’ve called fanboyism and Jonathan Haidt calls “groupishness”.

The Value of the Now

I am continually impressed by the firm grasp of economic theory that public-facing representatives of Valve always seem to have–even before they brought on an actual economist. In this case, Holtman clearly gets time preference.

For instance, if all that were true, nobody would ever pre-purchase a game ever on Steam, ever again. You just wouldn’t. You would in the back of your mind be like, okay, in six months to a year, maybe it’ll be 50 per cent off on a day or a weekend or during one of our seasonal promotions. Probably true. But our pre-orders are bigger than they used to be. Tonnes of people, right? And our day one sales are bigger than they used to be. Our first week, second week, third week, all those are bigger.

When asked to comment on why Steam customers are behaving the opposite of how we would expect them to, given the incentives, Holtman states “the trade-off they’re making is a time trade-off.”

Time preference is the term economists use to describe the phenomenon whereby individuals are willing to pay more for something in the present than they would be at a later date. There are a lot of reasons why something might be more valuable sooner rather than later. There’s always an element of uncertainty–you know they’ll discount any apples the store has left tomorrow, but what if they run out entirely before that? You know Valve will discount a game by a huge amount in a few months, but what if Valve goes out of business before then? What if you lose your hands before then and are unable to play video games ever again?

There are other reasons as well, which are more idiosyncratic. In an era before refrigeration or pasteurization, a bottle of milk worth five dollars today might be worth zero dollars in a week. But it wouldn’t make any sense to wait a week in order to get five dollars off, because it will have spoiled by then.

It is not intuitive on the face of it that video games should have steep discount functions. After all, video games do not spoil, and the uncertainties surrounding their future purchase aren’t much different than a lot of goods with less dramatic discount functions. So what’s going on here?

Gamer Tribalism

Following that argument, nobody would ever go to a first run movie ever again. Even now, as DVDs come out even faster, you’d just be like, heck, I’ll just wait and get the DVD and me and 10 friends will watch it. But people still like to go to theatres because they want to see it first, or they want to consume it first. And that’s even more true with games.

In The Righteous Mind, moral psychologist Jonathan Haidt describes how human beings are inherently group-oriented. A lot of things that we like to think we prefer because of some inherent property we actually like because of how it connects us with other people.

For simplicity’s sake, let’s say that a consumer’s valuation of a given good can be split cleanly into two parts–the value they gain from it as an individual, and its prosocial value.

In video games, the individual value would come from most of the obvious things–how fun it is to play, how challenging it is, how good the art is and how well the story is written.

The prosocial value would come from having it as a topic of conversation with all the other people who are currently playing it or only recently finished it. Anyone who bought any of the Harry Potter books near launch day knows what this is like; everyone wanted to get and read the latest book as soon as it came out so that they could immediately turn around and talk to their friends about it.

In video games there is also the added prosocial value of being able to play with other people at parties or online, and being able to connect with new people in the game.

I would argue that, for practical purposes, the individual value of a game barely changes. To the extent that it is driven down over time by an increase in substitutes, it falls much more slowly than the prosocial value does. Much of the prosocial value is created by the fact that everyone expects everyone else to jump at a game when it is brand new; this doesn’t last long, as the group then moves on to the next new thing.
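One crude way to formalize the split (the dollar figures and the two-month half-life are invented, not measured): treat individual value as flat and let prosocial value decay as the group moves on.

```python
def game_value(months_after_release, individual=30.0,
               prosocial=30.0, half_life=2.0):
    """Total subjective value of a game over time (parameters invented).
    Individual value stays constant; prosocial value halves every
    `half_life` months as the group moves on to the next new thing."""
    decay = 0.5 ** (months_after_release / half_life)
    return individual + prosocial * decay

at_launch = game_value(0)   # 30 + 30        = 60.0
later = game_value(6)       # 30 + 30 * 1/8  = 33.75
```

On these made-up numbers, paying full price ($60) at launch and paying half price (about $30) six months later yield roughly the same surplus, which is consistent with strong pre-orders and deep-sale purchases coexisting.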

So how much of the value that most consumers get from a game is prosocial, and how much is for the inherent joy of playing the video game itself?

Well, if Valve is to be believed, then the prosocial value makes up as much as 50 or 75 percent of consumers’ valuation of most games. That is an enormous fraction, and I have to wonder how representative it is of consumer valuation more broadly.

Holtman does seem to indicate that at least some of the value is individual:

Now you can do things like say, I never did own XCOM. Maybe I should buy that for $2 or $5 and pick it up. Or I didn’t get that triple-A game from three years ago, maybe I’ll pick that up on a promotion. And that’s making people happier.

But even here there’s a prosocial element–he states that the ability to get something late for cheap is actually “making them more willing to even buy the first time release.” In other words, if you didn’t get in on Portal 1 when it came out, but had a bunch of friends who did, you can “catch up” now for cheap, and then when Portal 2 comes along you’re more likely to pay the premium to be part of the group.

A lot of people in behavioral economics and moral psychology take their findings to be at odds with standard economic models. But I have always seen them as complementary; as giving us a much better idea of how subjective values are arrived at in the real world. I also share Yanis Varoufakis’ optimism that digital systems like Steam will provide even more insight into human nature than traditional social science experiments or data mining ever could.

In short, it’s a very exciting time to be interested in social science. Also, an exciting time to be a gamer!

Fragility and Feedback

We have been fragilizing the economy, our health, political life, education, almost everything… by suppressing randomness and volatility. Just as spending a month in bed (preferably with an unabridged version of War and Peace and access to The Sopranos’ entire eighty-six episodes) leads to muscle atrophy, complex systems are weakened, even killed when deprived of stressors.

-Nassim Taleb, Antifragility (draft of prologue)

Such protectionist policies enforce stability at the cost of stifling both resilience and progress. They eliminate the checking process essential to trial-and-error learning, the way by which we identify the “failures” that new forms might correct.

-Virginia Postrel, The Future and Its Enemies

Google’s server architecture is very robust against failures. The quality of the company’s products, and their bottom line, depend on their ability to process enormous amounts of data without interruption and with a low risk of losing any of it. The danger is not hypothetical–companies have been wiped out because some freak accident they were unprepared for destroyed a large fraction of the data they relied on.

Steven Levy’s book on Google makes it clear that they were forced to become robust by their circumstances. Most companies at the time would pay for expensive, high-end servers that had a very low rate of failure. Google did the opposite–they went for inexpensive servers with an extremely high rate of failure. In order to survive, they had to create software for their servers that would preserve their data and keep their workflow from being interrupted even as servers failed left and right.

Google owes their resilient infrastructure to the fragility of their early servers.

In an active quest for resilient infrastructure, Netflix imposed disorder by design upon their servers.

Imagine getting a flat tire. Even if you have a spare tire in your trunk, do you know if it is inflated? Do you have the tools to change it? And, most importantly, do you remember how to do it right? One way to make sure you can deal with a flat tire on the freeway, in the rain, in the middle of the night is to poke a hole in your tire once a week in your driveway on a Sunday afternoon and go through the drill of replacing it. This is expensive and time-consuming in the real world, but can be (almost) free and automated in the cloud.

This was our philosophy when we built Chaos Monkey, a tool that randomly disables our production instances to make sure we can survive this common type of failure without any customer impact. The name comes from the idea of unleashing a wild monkey with a weapon in your data center (or cloud region) to randomly shoot down instances and chew through cables — all the while we continue serving our customers without interruption. By running Chaos Monkey in the middle of a business day, in a carefully monitored environment with engineers standing by to address any problems, we can still learn the lessons about the weaknesses of our system, and build automatic recovery mechanisms to deal with them. So next time an instance fails at 3 am on a Sunday, we won’t even notice.

Netflix understands that failure is feedback. Until something goes wrong, they can’t discover the gaps in their ability to cope with failure. So rather than resting on their laurels, they put themselves through a constant trial by fire, forcing themselves to stay ready and to keep improving the system. It is no different from taking small doses of a disease or poison in order to build immunity, or working your body above and beyond the demands your life makes on it in order to increase its fitness. There are many things in human life where stressors are a prerequisite for improvement–or simple maintenance.
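The drill Netflix describes can be sketched in miniature (the cluster, supervisor, and instance names below are invented for illustration, not Netflix's actual tooling): kill a random instance on purpose, verify service survives, let automatic recovery replace the casualty.

```python
import random

class Cluster:
    """A redundant pool of instances; service survives while any are up."""
    def __init__(self, size):
        self.size = size
        self.alive = {f"instance-{i}" for i in range(size)}

    def serving(self):
        return len(self.alive) > 0

    def chaos_monkey(self):
        """Deliberate stressor: kill one instance at random."""
        victim = random.choice(sorted(self.alive))
        self.alive.discard(victim)
        return victim

    def supervisor(self):
        """Automatic recovery: replace anything that died."""
        self.alive = {f"instance-{i}" for i in range(self.size)}

cluster = Cluster(3)
for _ in range(10):              # run the drill repeatedly
    cluster.chaos_monkey()
    assert cluster.serving()     # customers never notice
    cluster.supervisor()
```

The point of the exercise is the assertion in the loop: if it ever fails, you have learned about a weakness at a time of your choosing, not at 3 am on a Sunday.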

Yet stressors are precisely what we seek to hide from in the world of policy. It is my contention that we are too terrified of short term risk and volatility in this country. Rather than embracing Chaos Monkeys of our own, we simply keep a spare in the back of the car and assume everything will go well if we ever have a flat. The only way to grow stronger, wealthier, and more resilient in the long run is to expose ourselves to a lot more risk and volatility than we have lately shown a willingness to cope with.

Deafening Ourselves

It’s not my purpose to single out the environmental movement, but that does embody a certain mentality about risk that has become so tied up in intellectual knots that it has the net long term effect of making things more risky. It is my thesis that a small number of people have to be willing to shoulder greater risks in order to create changes that eventually reduce risk for civilization as a whole.

Solve for X: Neal Stephenson on getting big stuff done

Stephenson’s point about risk is part of his larger argument that innovation in this country has stagnated, a view he shares with Tyler Cowen and Peter Thiel, among others. Putting his general conclusion to the side, I think the importance he places on at least some subset of the population needing to shoulder more short term risk to reduce overall long term risk is absolutely true.

Instead, we take measures to “manage risk”, deafening ourselves to feedback in the process.

For example, there are risks associated with allowing people to build what they want on the property that they own. They could introduce something that disrupts the neighborhood, either by taking up all the parking, or making noise, or both. So we have zoning laws, building permits, and various business licenses. As a result, real estate supply cannot respond to the massive demand for city living, and prices skyrocket.

Moreover, fewer business experiments are possible when everything has to fit a cookie-cutter business license. In Fairfax County, Virginia, a small theater had to wait nearly a year to open because the county had never had a theater before and wasn’t sure how to license one. That’s an enormous opportunity cost to impose on an operation of that size.

The political process through which license or zoning categories can be changed, and permits are issued, is extremely slow to respond to changes on the ground. While a more open system would hear the demand for denser development as loud as a scream, we’re so busy protecting ourselves from short term disruptions that we have essentially left ourselves deaf to it, and to all the potential beneficial innovations that could have happened.

This is no academic point; the toll of this aversion can be measured in wealth as well as lives. Nothing is more emblematic of our attitudes toward risk than the 12-year, multimillion-dollar process that new drugs must go through before the FDA allows them to go to market. This lag has led to countless unnecessary deaths (PDF), not to mention making new drugs enormously more expensive once they finally do reach the market. And the ability of FDA trials to truly keep us safe is itself questionable–the trial samples are not really random, and an effect that seems small in a sample of thousands might nevertheless affect a huge number of people once the drug hits a market of millions.

The bottom line is that there are things that cannot really be known until you take the drug to market. Doctors should have to perform their due diligence of informing patients of the risks and unknowns, but delaying entry by over a decade and piling on enormous costs accomplishes very little. Unless your goal is to drastically reduce the number of new treatments we are capable of discovering per year.

We put off the short term risks and increase our long run costs.

Ditch Stability

The economy, politics, and job market of the future will host many unexpected shocks. In this sense, the world of tomorrow will be more like the Silicon Valley of today: constant change and chaos. So does that mean you should try to avoid those shocks by going into low-volatility careers like health care or teaching? Not necessarily. The way to intelligently manage risk is to make yourself resilient to these shocks by pursuing those opportunities with some volatility baked in. Taleb argues–furthering an argument popularized by ecologists who study resilience–that the less volatile the environment, the more destructive a black swan will be when it comes. Nonvolatile environments give only an illusion of stability.

-Reid Hoffman and Ben Casnocha, The Start-up of You

We need more risk and volatility, and we need to give up our fruitless quest to hide from them.

In many ways this quest reflects a lack of historical perspective. We bail out the US automakers again and again because they were once the symbol of American greatness, and we think that once they are gone we will never shine again. Yet we forget that at the turn of the 20th century, 41 percent of our labor force was employed in agriculture, and at the end of it, it was down to less than 2 percent. We have undergone massive sectoral shifts before. There is no guarantee that it will go as well this time, but there’s also no reason to think that it won’t.

We restrict immigration and imports because they pose an immediate risk to specific workers and businesses in the short run. Yet we forget that during periods of far more open immigration and trade, we experienced historically unprecedented levels of growth. Moreover, opening these channels opens us to feedback–from the ideas, new business models, the scientific and technological breakthroughs occurring worldwide and that might occur here if we would allow people to come here.

We should not be focusing our efforts on fighting risk and volatility, but on fighting fragility. We should fight for feedback.

It is only in the face of volatility that we are able to innovate and grow resilient.

Fanboy Politics and Information as Rhetoric

News has to be subsidized because society’s truth-tellers can’t be supported by what their work would fetch on the open market. However much the Journalism as Philanthropy crowd gives off that ‘Eat your peas’ vibe, one thing they have exactly right is that markets supply less reporting than democracies demand. Most people don’t care about the news, and most of the people who do don’t care enough to pay for it, but we need the ones who care to have it, even if they care only a little bit, only some of the time. To create more of something than people will pay for requires subsidy.

-Clay Shirky, Why We Need the New News Environment to be Chaotic

There are few contemporary thinkers that I respect more on matters of media and the Internet than Clay Shirky, but his comment about how much reporting “democracies demand” has bothered me since he wrote it nearly a year ago now. I think the point of view implied in the quoted section above misunderstands what reporting really is, as well as how democracies actually work.

To understand the former, it helps to step away from the hallowed ground of politics and policy and focus instead on reporting in those areas considered more déclassé. The more vulgar subjects of sports, technology, and video games should suffice.

Fanboy Tribalism

One of the most entertaining things about The Verge’s review of the Lumia 900 was not anything editor-in-chief Joshua Topolsky said in the review itself. No, what I enjoyed most was the tidal wave of wrath that descended upon him from the Windows Phone fanboys, who it seemed could not be satisfied by anything less than a proclamation that the phone had a dispensation from God himself to become the greatest device of our time. The post itself has over 2,400 comments at the moment I’m writing this, and for weeks after it went up any small update about Windows Phone on The Verge drew the ire of this contingent.

The fanboy phenomenon is well known among tech journalists, many of whom have been accused of fanboyism themselves. It’s a frequent complaint among the Vergecast’s crew that when they give a negative review to an Android phone, they are called Apple fanboys; when they give a negative review to a Windows Phone device, they are called Android fanboys; and so on.

To the diehard brand loyalist, the only way that other people could fail to see their preferred brand exactly the same way that they see it is if those other people have had their judgment compromised by their loyalty to some other brand. So Joshua Topolsky’s failure to understand the glory that is the Lumia 900 stems from the fact that he uses a Galaxy Nexus, an Android device, and his Android fanboyism makes it impossible for him to accurately judge non-Android things.

There came a certain moment when I realized that fanboy tribalism was a symptom of something innate in human nature, and that you saw it in every subject that had news and reporting of some sort. It may have become cliché to compare partisan loyalty with loyalty to a sports team, but the analogy is a valid one. Just as there are brand fanboys, there are sports team fanboys and political party fanboys.

Back in middle school, I got really wrapped up in this–as a Nintendo fanboy. I had a friend who was a really big PlayStation fanboy, and we had the most intense arguments over it. I don’t think I’ve ever had arguments that got as ferocious as those since–not about politics, not about morality, not about anything. We would each bring up the facts that we thought should have made it obvious which console was superior, and then get infuriated that the other side didn’t immediately concede defeat. I personally always came prepared with the latest talking points from Nintendo’s very own Pravda, Nintendo Power magazine.

Cognitive Biases and Group Dynamics

Cognitive science has a lot to say about why people act this way. A lot of progress has been made in cataloging the various biases that skew how human beings see the world. Acknowledging that people have a confirmation bias has become quite trendy in certain circles, though it hasn’t really improved the level of discourse. My favorite trope in punditry these days is when one writer explains how a different writer, or a politician they disagree with, can’t see the obvious truth because of confirmation bias–ignoring the fact that the writer himself has the very same bias, as all humans do.

Most of the discussion around cognitive biases centers on how they lead us astray from a more accurate understanding of the world. The more interesting conversation focuses on what these biases emerged to accomplish in the first place, in the evolutionary history of man. The advantages of cementing group formation in hunter-gatherer societies have been explored by moral psychologist Jonathan Haidt in his recent book The Righteous Mind. Arnold Kling has an excellent essay in which he applies Haidt’s insights to political discourse.

The fact is that even in our modern, cosmopolitan world, we human beings remain a tribal species. Only instead of our tribes being the groups we were born among and cooperate with in order to survive, we have the tribe of Nintendo, the tribe of Apple, and the tribe of Republicans.

When the Apple faithful read technology news, they aren’t looking for information, not really. They’re getting a kind of entertainment, similar to the kind a Yankee fan gets from reading baseball news. Neither has any decision that they are trying to inform.

Political news is exactly the same. When a registered Democrat reads The Nation, we like to think that something more sophisticated is going on than with our Apple or Yankee fan. But there is not. All of them might as well be my 13-year-old self, reading the latest copy of Nintendo Power. The Democrat was already going to vote for the Democratic candidate; it doesn’t matter what outrageous thing The Nation claims Republicans have done lately.

Information as Rhetoric

I think the fear that there might not be enough truth-seekers out there fighting to get voters the salient facts about the rich and powerful is misplaced, for a few reasons. For one thing, in this day and age, it is much easier to make information public than it is to keep it secret. For another, it is rarely truth-seekers who leak such information–it is people who have an ax to grind.

The person who leaked the emails from the Climate Research Unit at the University of East Anglia wasn’t some heroic investigative-journalist type with an idealistic notion of transparency. They were almost certainly someone who didn’t believe in anthropogenic global warming and wanted to dig up something to discredit those who did. A skeptic fanboy, if you like, out to discredit the climate fanboys.

The people that get information of this sort out to the public are almost always pursuing their own agendas, and attempting to block someone else’s. It’s never about truth-seeking. That doesn’t invalidate what they do, but it does shed a rather different light on getting as much information as “democracies demand”. Democracies don’t demand anything–people have demands, and their demands are often to make the people they disagree with look like idiots and keep them from having any power to act on their beliefs.

To satisfy either their own demands or that of an audience, some people will pursue information to use as a tool of rhetoric.

How Democracies Behave

Let us think of this mathematically for a moment. If information is the input, and democracy is the function, then what is the output?

I’m not going to pretend to have a real answer to that. There’s an entire field, public choice, with scholars dedicating a lot of research and thought to understanding how democracies and public institutions in general behave and why. My father has spent a lot of time thinking about what impact information in particular has on political and social outcomes. I am no expert on any of these subjects, and will not pretend to be.

I will tentatively suggest, however, that people do not vote based on some objective truth about what benefits and harms us as a society. I think people vote based on their interests. That is, narrow material interest–such as whether a particular law is likely to put them out of work or funnel more money their way. But also ideological or tribal interest–whether it advances a cause they believe in, or a group they consider themselves a part of.

So I don’t really see a reason to insist on subsidizing journalism. All that will accomplish is bending those outlets towards the interests of the ones doing the subsidizing.

From Politics to Porcelain

I must study Politicks and War that my sons may have liberty to study Mathematicks and Philosophy. My sons ought to study Mathematicks and Philosophy, Geography, natural History, Naval Architecture, navigation, Commerce and Agriculture, in order to give their Children a right to study Painting, Poetry, Musick, Architecture, Statuary, Tapestry and Porcelaine.

John Adams

Think of the market as a set of magnets, pulling people into dense clusters. 80 percent of the population of the US lives in a large city’s metropolitan area, and the top 10 metropolitan areas alone account for a quarter of the population. Cities like Chicago and New York pull people not only from all over the country, but from all over the world. As new people are born and raised in each metropolitan area, the market sorts out where their skills would best be put to use. If it’s somewhere other than where they are, the magnet is engaged to pull them to the new destination.

People are not only pulled towards locations, but towards careers and towards investing in particular skillsets.

The actual mechanism behind the figurative magnets is not always as straightforward as offering more money to live in one place than another, or to take one career track over another, but it often is. There are other pulls, too, such as which professional development courses and training your current employer is willing to pay for.

Taking as my inspiration the John Adams quote at the top of this post, I’m going to argue that the mechanisms the market uses to pull people into particular locations and career paths grow weaker as a population grows wealthier.

Where Have All the STEM Majors Gone?

It’s a disconcerting question for many an analyst of education in the US–why are we graduating so few majors in science, technology, engineering, and math? Alex Tabarrok has the numbers:

Consider computer technology. In 2009 the U.S. graduated 37,994 students with bachelor’s degrees in computer and information science. This is not bad, but we graduated more students with computer science degrees 25 years ago! The story is the same in other technology fields such as chemical engineering, math and statistics. Few fields have changed as much in recent years as microbiology, but in 2009 we graduated just 2,480 students with bachelor’s degrees in microbiology — about the same number as 25 years ago. Who will solve the problem of antibiotic resistance?

If students aren’t studying science, technology, engineering and math, what are they studying?

In 2009 the U.S. graduated 89,140 students in the visual and performing arts, more than in computer science, math and chemical engineering combined and more than double the number of visual and performing arts graduates in 1985.

Anyone can see that the average computer science major is going to make more money than the average visual arts major. The market has engaged the magnets and students aren’t budging.

Education isn’t the only area where the magnets’ strength is waning. According to the New York Times:

The likelihood of 20-somethings moving to another state has dropped well over 40 percent since the 1980s, according to calculations based on Census Bureau data. The stuck-at-home mentality hits college-educated Americans as well as those without high school degrees. According to the Pew Research Center, the proportion of young adults living at home nearly doubled between 1980 and 2008, before the Great Recession hit.

What is going on?

Lower Stakes, Greater Sacrifice

Let’s compare two hypothetical individuals, Tom and Harry. Tom is a 20-year-old living 50 years ago, and Harry is a 20-year-old today. My general hypothesis is that the stakes of Harry’s decisions are lower than the stakes of Tom’s, and that anything requiring a great deal of time and effort demands that Harry give up more than Tom had to.

Let’s consider Harry first. In the immediate term, he has an enormous number of options for how to spend his time: cable TV, the internet, video games; a whole myriad of stuff. Any field of study that takes up a lot of his time means giving up some of that. As Tabarrok described, the average individual like Harry today chooses majors that not only demand less of his time–thus leaving more time to play video games–but are also enjoyable in themselves, such as the visual arts. After college, if he can’t get a job in anything resembling what he majored in, he can probably live with his parents, where he’ll still be able to enjoy many of the things he was already doing with his free time.

Now consider Tom. Sure, he probably had access to TV, but it wasn’t as pervasive as it is today, and it had like three channels. There were no video games, there was no internet or web; there weren’t even personal computers. If he picked a major that was a dud in the marketplace, maybe he could live with his parents–though he was less likely on average than Harry to be able to, as parents today are much wealthier than parents were fifty years ago–but what would he do there? Fifty years ago you needed more money to be able to get anything approaching the level of options that Harry has available almost by default.

Consider the different stakes: if Tom doesn’t get his career going, he becomes a burden on a family that might not be able to afford it, and he is also probably bored out of his mind and increasingly isolated. Harry, on the other hand, is much more likely to have parents who can afford that burden, and he has much more to do while he lives with them. He can entertain himself, and he can talk to people online; you may argue that the latter isn’t as fulfilling as in-person socializing, but it’s far less lonely than having no one to talk to.

Now consider what each has to give up by pursuing a STEM career: Harry loses out on hours of gaming, movies, TV, browsing the web, talking to people on Twitter, and so on. Tom doesn’t have any of that to lose.
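The comparison can be put in toy numbers (every figure here is invented): the pull toward a demanding career path is its payoff minus the value of the leisure forgone during the years of hard study.

```python
def net_pull(career_payoff, leisure_value_per_year, years_of_grind=4):
    """Net attraction of a demanding career track: its payoff minus the
    leisure given up while pursuing it. All numbers invented."""
    return career_payoff - leisure_value_per_year * years_of_grind

tom = net_pull(career_payoff=100, leisure_value_per_year=5)     # 80
harry = net_pull(career_payoff=100, leisure_value_per_year=20)  # 20
```

Holding the career payoff fixed, richer leisure alone weakens the magnet, which is the mechanism this section argues for.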

The decline in the portion of men who are employed has been a secular trend for decades.

It has been hidden in the general population employment ratio because of the entering of women into the workforce. Note, however, that though the portion of women who are employed has grown, it still has never reached the level that men have fallen to now, during a soft labor market.

The good news is that we are so wealthy as a society that fewer men need to work than they used to. The bad news is that our wealth is making it harder to convince people to do the difficult work required to make the kinds of material breakthroughs that people in the STEM fields are able to make. It has likewise grown harder to convince them to move away from their friends and family in order to go to the city where their particular skillset might have the greatest impact.

Studying Porcelain

John Adams was right–the mathematical, architectural, and commercial know-how of our ancestors has made it possible for more of us to study poetry and comparative literature. When people are bemoaning the lack of STEM majors and labor mobility, they should remember that the whole point of wealth is to provide us with more options. If someone is more satisfied spending their time reading and writing fiction rather than learning statistics or trigonometry, there is nothing wrong with that. They can increase our overall wealth just as much as a scientist can, if they produce things that are valued by a lot of other people.

On the other hand, it takes STEM skillsets to cure cancer or build self-driving cars, and the number of people per capita with those skillsets continues to fall in this country.

Still, I’m not too concerned. The vast majority of the world is nowhere near as wealthy as we are. Engineers, programmers, and chemists are being trained in unprecedented numbers in countries like China and India, and for the most part the whole world will benefit from their advances. As those countries grow wealthier, they’ll experience the same phenomena, but we’ll still be talking about enormous numbers of people with STEM skillsets. And we’re a long way off from the developing world reaching a level of wealth comparable to the US or Western Europe.

 

Semi-Related Reading: