Groupishness and Video Game Economics

The world of PC video games is currently ruled by Valve, through their digital game store Steam, which boasts some 40 million users. Part of their success can be credited to their practice of providing heavy discounts on games that are a few months or a year old.

Rival company EA claims that this practice helps intermediaries like Steam while hurting the game developers who have invested a lot of resources into making quality products. David DeMartini, head of Origin, EA’s alternative to Steam, claims that such discounts “cheapen the intellectual property.” He then suggests that the system creates perverse incentives:

One criticism some have levelled at Steam is that its heavy discounts damage video game brands because gamers hold off on buying new releases at launch in anticipation of a future sale.

DeMartini agreed with this position: “What Steam does might be teaching the customer, ‘I might not want it in the first month, but if I look at it in four or five months, I’ll get one of those weekend sales and I’ll buy it at that time at 75 per cent off.’”

Valve responded that DeMartini’s claim does not match the facts. Business development chief Jason Holtman first points out that, as game developers themselves, they eat their own dogfood.

We do it with our own games. If we thought having a 75 per cent sale on Portal 2 would cheapen Portal 2, we wouldn’t do it. We know there are all kinds of ways customers consume things, get value, come back, build franchises. We think lots of those things strengthen it.

In order to understand why a discount later might not impact sales today, you need only two simple concepts: time preference, and what I’ve called fanboyism and Jonathan Haidt calls “groupishness”.

The Value of the Now

I am continually impressed by the firm grasp of economic theory that Valve’s public-facing representatives seem to have–and did, even before the company brought on an actual economist. In this case, Holtman clearly gets time preference.

For instance, if all that were true, nobody would ever pre-purchase a game ever on Steam, ever again. You just wouldn’t. You would in the back of your mind be like, okay, in six months to a year, maybe it’ll be 50 per cent off on a day or a weekend or during one of our seasonal promotions. Probably true. But our pre-orders are bigger than they used to be. Tonnes of people, right? And our day one sales are bigger than they used to be. Our first week, second week, third week, all those are bigger.

When asked to comment on why Steam customers are behaving the opposite of how we would expect them to, given the incentives, Holtman states “the trade-off they’re making is a time trade-off.”

Time preference is the term economists use to describe the phenomenon whereby individuals are willing to pay more for something in the present than they would be at a later date. There are a lot of reasons why something might be more valuable sooner rather than later. There’s always an element of uncertainty–you know they’ll discount any apples the store has left tomorrow, but what if they run out entirely before that? You know Valve will discount a game by a huge amount in a few months, but what if Valve goes out of business before then? What if you lose your hands before then and are unable to play video games ever again?

There are other reasons as well, which are more idiosyncratic. In an era before refrigeration or pasteurization, a bottle of milk worth five dollars today might be worth zero dollars in a week. But it wouldn’t make any sense to wait a week in order to get five dollars off, because it will have spoiled by then.

It is not intuitive, on the face of it, that video games should have such steep discount functions. After all, video games do not spoil, and the uncertainties surrounding their future purchase aren’t much different from those of many goods with far less dramatic discount functions. So what’s going on here?

Gamer Tribalism

Following that argument, nobody would ever go to a first run movie ever again. Even now, as DVDs come out even faster, you’d just be like, heck, I’ll just wait and get the DVD and me and 10 friends will watch it. But people still like to go to theatres because they want to see it first, or they want to consume it first. And that’s even more true with games.

In The Righteous Mind, moral psychologist Jonathan Haidt describes how human beings are inherently group-oriented. A lot of things that we like to think we prefer because of some inherent property we actually like because of how it connects us with other people.

For simplicity’s sake, let’s say that a consumer’s valuation of a given good can be split cleanly into two parts–the value they gain from it as an individual, and its prosocial value.

In video games, the individual value would come from most of the obvious things–how fun it is to play, how challenging it is, how good the art is and how well the story is written.

The prosocial value would come from having it as a topic of conversation with all the other people who are currently playing it or only recently finished it. Anyone who bought any of the Harry Potter books near launch day knows what this is like; everyone wanted to get and read the latest book as soon as it came out so that they could immediately turn around and talk to their friends about it.

In video games there is also the added prosocial value of being able to play with other people at parties or online, and being able to connect with new people in the game.

I would argue that, for practical purposes, the individual value of a game never changes. To the extent that it is driven down over time by an increase in substitutes, it decreases much more slowly than the prosocial value. Much of the prosocial value is created by the fact that everyone expects everyone else to jump on a game when it is brand new; that value doesn’t last long, because the group soon moves on to the next new thing.

So how much of the value that most consumers get from a game is prosocial, and how much is for the inherent joy of playing the video game itself?

Well, if Valve is to be believed, then the prosocial value makes up as much as 50 or 75 percent of consumers’ valuation of most games. That is an enormous fraction, and I have to wonder how representative it is of consumer valuation more broadly.
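
To make the logic concrete, here is a toy model of my own–the split, the dollar figures, and the decay rate are all illustrative assumptions, not anything from Valve: a game’s value is an individual component that stays roughly flat plus a prosocial component that decays as the player base moves on. Under those assumptions, paying full price at launch can still leave the buyer better off than waiting for a 75 percent discount.

    # Toy model of a consumer's valuation of a game over time. Everything here is a
    # hypothetical illustration: the function name, the dollar figures, and the
    # two-month "half-life" of prosocial value are my assumptions, not Valve's data.

    def perceived_value(individual, prosocial, months_since_launch, half_life=2.0):
        """Individual value stays flat; prosocial value halves every half_life months."""
        return individual + prosocial * 0.5 ** (months_since_launch / half_life)

    launch_price = 60.0
    sale_price = 15.0  # a 75 per cent weekend discount, five months after launch

    # Suppose the game is worth $20 to you alone and $60 for the launch-window
    # conversations and co-op sessions (75 percent prosocial, per the estimate above).
    surplus_at_launch = perceived_value(20, 60, 0) - launch_price  # 80 - 60 = 20.0
    surplus_on_sale = perceived_value(20, 60, 5) - sale_price      # ~30.6 - 15 = ~15.6

    print(f"consumer surplus buying at launch: {surplus_at_launch:+.1f}")
    print(f"consumer surplus waiting for sale: {surplus_on_sale:+.1f}")

With these particular numbers, the steep sale never beats buying at launch, because most of what the discount would save has already evaporated along with the prosocial value; tilt the split toward individual value and waiting starts to win.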

Holtman does seem to indicate that at least some of the value is individual:

Now you can do things like say, I never did own XCOM. Maybe I should buy that for $2 or $5 and pick it up. Or I didn’t get that triple-A game from three years ago, maybe I’ll pick that up on a promotion. And that’s making people happier.

But even here there’s a prosocial element–he states that the ability to get something late for cheap is actually “making them more willing to even buy the first time release.” In other words, if you didn’t get in on Portal 1 when it came out, but had a bunch of friends who did, you can “catch up” now for cheap, and then when Portal 2 comes along you’re more likely to pay the premium to be part of the group.

A lot of people in behavioral economics and moral psychology take their findings to be at odds with standard economic models. But I have always seen them as complementary; as giving us a much better idea of how subjective values are arrived at in the real world. I also share Yanis Varoufakis’ optimism that digital systems like Steam will provide even more insight into human nature than traditional social science experiments or data mining ever could.

In short, it’s a very exciting time to be interested in social science. Also, an exciting time to be a gamer!

Two Arguments in Defense of Unpaid Internships

I’ve heard a lot of arguments about how unpaid internships are evil or a form of exploitation. Recently I heard some of those arguments brought up again, and I decided it was time to stand up for this much-maligned position.

An Argument from Principles

Let me lay out a few scenarios for you:

  1. Five people start their own individual blogs, writing several posts a day and making them freely available online.
  2. Those five people decide they want to ditch their individual blogs and all blog together in a group blog.
  3. A small paper offers to have the five of them blog under their banner, but does not pay them to do it.
  4. Instead of 3, the five people ditch their blog and get unpaid internships at the small paper.

I would venture that most people who think there is something wrong with unpaid internships don’t think that there’s anything wrong with 1-3 above. What I would like is for those who believe there is something wrong with number 4 to explain to me what distinguishes it, morally, from 1-3. Because I can’t see it.

I suppose they could argue that there is some distinction between number 4 and the unpaid internships they don’t like. I welcome people to explain to me if that is the case.

So my first argument is simple: if there’s nothing wrong with 1-3, and there’s no moral difference between them and 4, then, in principle, there is nothing wrong with unpaid internships.

An Argument from Consequences

You can’t really talk about consequences without making a bunch of assumptions about what good consequences are. So I’m going to tread carefully here, but I think the assumptions I’m going to make are pretty reasonable and widely shared.

Of course you can’t even talk about consequences without some idea of how the world works. Rather than pretending to know more than I do, let me lay out a few more possible scenarios for you:

  1. If the small paper is forced by law to pay their interns, it won’t keep any of them; our five individuals will not be associated with the paper at all.
  2. If the small paper is forced by law to pay their interns, it will pay one of them a paltry amount and get rid of the other four.
  3. If the small paper is forced by law to pay their interns, it will pay three of them a paltry amount and not the other two.
  4. If the small paper is forced by law to pay their interns, it will pay all of them.

I think some of the people who are against unpaid internships believe that the world works in such a way as to make number 4 possible. I, on the other hand, tend to believe that the world looks more like 1 or 2 than 4.

This belief of mine is subject to debate, of course. For now, all I’ll say is that in the news business in particular, margins are so low that I have to think we’re much closer to 1 than we are to 2, for that particular industry at the very least.

People take unpaid internships because they gain something from them; whether it’s experience, exposure, or the fact that it’s more prestigious to have a professional publication on your resume than a personal blog. Taking the unpaid internship makes them better off, at least in the long run. Taking a paid internship obviously makes them even better off, but they won’t always have the luxury of that choice.

Readers are also better off when they have more pieces to read that they enjoy. If fewer unpaid interns at the paper means fewer enjoyable pieces for the readers, then getting rid of unpaid interns makes them worse off.

So my second argument, while slightly longer in buildup than the first, is still quite simple: if we live in a world that looks like number 1 or 2 above, and arguably even 3, then getting rid of unpaid internships makes the potential interns as well as potential beneficiaries (in this scenario, readers) worse off.

EDIT: Patrick Delaney came back with a scenario that definitely merits discussion:

https://twitter.com/pxdelaney/status/218856415765344256

And discuss it we did–you can see the whole conversation here.

Fragility and Feedback

We have been fragilizing the economy, our health, political life, education, almost everything… by suppressing randomness and volatility. Just as spending a month in bed (preferably with an unabridged version of War and Peace and access to The Sopranos’ entire eighty-six episodes) leads to muscle atrophy, complex systems are weakened, even killed when deprived of stressors.

-Nassim Taleb, Antifragile (draft of prologue)

Such protectionist policies enforce stability at the cost of stifling both resilience and progress. They eliminate the checking process essential to trial-and-error learning, the way by which we identify the “failures” that new forms might correct.

-Virginia Postrel, The Future and Its Enemies

Google’s server architecture is very robust against failures. The quality of the company’s products, and their bottom line, depend on their ability to process enormous amounts of data without interruption and with a low risk of losing any of it. The danger is not hypothetical–companies have been wiped out because some freak accident they were unprepared for destroyed a large fraction of the data they relied on.

Steven Levy’s book on Google makes it clear that they were forced to become robust by their circumstances. Most companies at the time would pay for expensive, high-end servers that had a very low rate of failure. Google did the opposite–they went for inexpensive servers with an extremely high rate of failure. In order to survive, they had to create software for their servers that would preserve their data and keep their workflow from being interrupted even as servers failed left and right.

Google owes their resilient infrastructure to the fragility of their early servers.

In an active quest for resilient infrastructure, Netflix imposed disorder by design upon their servers.

Imagine getting a flat tire. Even if you have a spare tire in your trunk, do you know if it is inflated? Do you have the tools to change it? And, most importantly, do you remember how to do it right? One way to make sure you can deal with a flat tire on the freeway, in the rain, in the middle of the night is to poke a hole in your tire once a week in your driveway on a Sunday afternoon and go through the drill of replacing it. This is expensive and time-consuming in the real world, but can be (almost) free and automated in the cloud.

This was our philosophy when we built Chaos Monkey, a tool that randomly disables our production instances to make sure we can survive this common type of failure without any customer impact. The name comes from the idea of unleashing a wild monkey with a weapon in your data center (or cloud region) to randomly shoot down instances and chew through cables — all the while we continue serving our customers without interruption. By running Chaos Monkey in the middle of a business day, in a carefully monitored environment with engineers standing by to address any problems, we can still learn the lessons about the weaknesses of our system, and build automatic recovery mechanisms to deal with them. So next time an instance fails at 3 am on a Sunday, we won’t even notice.

Netflix understands that failure is feedback. Until something goes wrong, they won’t be able to figure out what problems exist in their ability to cope with failure. So rather than resting on their laurels, they put themselves through a constant trial by fire to force themselves to stay ready and keep improving their system. It is no different from getting small doses of a disease or poison in order to build an immunity, or working your body out above and beyond the demands your life makes on it in order to increase its fitness. There are many things in human life where stressors are a prerequisite for improvement–or for simple maintenance.
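
For readers who want to picture what this looks like in practice, here is a minimal sketch of the idea–my own illustration with placeholder helpers, not Netflix’s actual code: during monitored business hours, pick a random production instance, kill it, and check that the service keeps answering.

    # Minimal sketch of chaos-monkey-style failure injection. The helper
    # functions are placeholders of my own for illustration; a real setup
    # would call a cloud provider's API and a real health-check endpoint.
    import random
    import time

    def running_instances():
        # Placeholder: in practice, ask the cloud provider for live instances.
        return ["web-1", "web-2", "web-3", "worker-1"]

    def terminate(instance_id):
        # Placeholder: in practice, call the provider's terminate API.
        print(f"terminating {instance_id}")

    def service_healthy():
        # Placeholder: hit a health-check endpoint and confirm it responds.
        return True

    def chaos_round():
        # Kill one instance at random, wait, then confirm customers never noticed.
        victim = random.choice(running_instances())
        terminate(victim)
        time.sleep(30)  # give automatic recovery a chance to kick in
        if not service_healthy():
            raise RuntimeError(f"outage after losing {victim} -- a weakness to fix")

    if __name__ == "__main__":
        chaos_round()  # run during business hours, with engineers standing by

The point of the exercise is the failure branch: every time the health check fails, a weakness has been found while engineers are watching, rather than at 3 am on a Sunday.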

Yet stressors are precisely what we seek to hide from in the world of policy. It is my contention that we are too terrified of short-term risk and volatility in this country. Rather than embracing Chaos Monkeys of our own, we simply keep a spare in the back of the car and assume everything will go well if we ever have a flat. The only way to grow stronger, wealthier, and more resilient in the long run is to expose ourselves to a lot more risk and volatility than we have lately shown a willingness to cope with.

Deafening Ourselves

It’s not my purpose to single out the environmental movement, but that does embody a certain mentality about risk that has become so tied up in intellectual knots that it has the net long term effect of making things more risky. It is my thesis that a small number of people have to be willing to shoulder greater risks in order to create changes that eventually reduce risk for civilization as a whole.

Solve for X: Neal Stephenson on getting big stuff done

Stephenson’s point about risk is part of his larger argument that innovation in this country has stagnated, a view he shares with Tyler Cowen and Peter Thiel, among others. Putting his general conclusion to the side, I think he is absolutely right to stress that at least some subset of the population needs to shoulder more short-term risk in order to reduce overall long-term risk.

Instead, we take measures to “manage risk”, deafening ourselves to feedback in the process.

For example, there are risks associated with allowing people to build what they want on the property that they own. They could introduce something that disrupts the neighborhood, either by taking up all the parking, or making noise, or both. So we have zoning laws, building permits, and various business licenses. As a result, real estate supply cannot respond to the massive demand for city living, and prices skyrocket.

Moreover, fewer business experiments are possible when everything has to fit a cookie-cutter business license. In Fairfax County, Virginia, a small theater had to wait nearly a year to open because the county had never had a theater before and wasn’t sure how to license one. That’s an enormous opportunity cost to impose on an operation of that size.

The political process through which license or zoning categories can be changed, and permits are issued, is extremely slow to respond to changes on the ground. While a more open system would hear the demand for denser development as loud as a scream, we’re so busy protecting ourselves from short-term disruptions that we have essentially left ourselves deaf to it, and to all the potentially beneficial innovations that could have happened.

This is no academic point; the toll of this aversion can be measured in wealth as well as lives. Nothing is more emblematic of our attitudes towards risk than the 12-year, multimillion-dollar process that new drugs must go through before the FDA allows them to go to market. This lag has led to countless unnecessary deaths (PDF), not to mention making new drugs enormously more expensive once they finally do reach the market. And the ability of FDA trials to truly keep us safe is questionable–trial samples are not really random, and an effect too rare to show up in a sample of thousands may nevertheless strike a huge number of people once the drug reaches a market of millions.

The bottom line is that there are things that cannot really be known until you take the drug to market. Doctors should still have to do their due diligence and inform patients of the risks and unknowns, but delaying entry by over a decade and piling on enormous costs accomplishes very little–unless your goal is to drastically reduce the number of new treatments we are capable of discovering each year.

We put off the short term risks and increase our long run costs.

Ditch Stability

The economy, politics, and job market of the future will host many unexpected shocks. In this sense, the world of tomorrow will be more like the Silicon Valley of today: constant change and chaos. So does that mean you should try to avoid those shocks by going into low-volatility careers like health care or teaching? Not necessarily. The way to intelligently manage risk is to make yourself resilient to these shocks by pursuing those opportunities with some volatility baked in. Taleb argues—furthering an argument popularized by ecologists who study resilience—that the less volatile the environment, the more destructive a black swan will be when it comes. Nonvolatile environments give only an illusion of stability.

-Reid Hoffman and Ben Casnocha, The Start-up of You

We need more risk and volatility, and we need to give up our fruitless quest to hide from them.

In many ways this quest reflects a lack of historical perspective. We bail out the US automakers again and again because they were once the symbol of American greatness, and we think that once they are gone we will never shine again. Yet we forget that at the turn of the 20th century, 41 percent of our labor force was employed in agriculture, and at the end of it, it was down to less than 2 percent. We have undergone massive sectoral shifts before. There is no guarantee that it will go as well this time, but there’s also no reason to think that it won’t.

We restrict immigration and imports because they pose an immediate risk to specific workers and businesses in the short run. Yet we forget that during periods of far more open immigration and trade, we experienced historically unprecedented levels of growth. Moreover, opening these channels opens us to feedback–to the ideas, new business models, and scientific and technological breakthroughs occurring worldwide, and to those that might occur here if we allowed more people to come.

We should not be focusing our efforts on fighting risk and volatility, but on fighting fragility. We should fight for feedback.

It is only in the face of volatility that we are able to innovate and grow resilient.

Fanboy Politics and Information as Rhetoric

News has to be subsidized because society’s truth-tellers can’t be supported by what their work would fetch on the open market. However much the Journalism as Philanthropy crowd gives off that ‘Eat your peas’ vibe, one thing they have exactly right is that markets supply less reporting than democracies demand. Most people don’t care about the news, and most of the people who do don’t care enough to pay for it, but we need the ones who care to have it, even if they care only a little bit, only some of the time. To create more of something than people will pay for requires subsidy.

-Clay Shirky, Why We Need the New News Environment to be Chaotic

There are few contemporary thinkers that I respect more on matters of media and the Internet than Clay Shirky, but his comment about how much reporting “democracies demand” has bothered me since he wrote it nearly a year ago now. I think the point of view implied in the quoted section above misunderstands what reporting really is, as well as how democracies actually work.

To understand the former, it helps to step away from the hallowed ground of politics and policy and focus instead on reporting in those areas considered more déclassé. The more vulgar subjects of sports, technology, and video games should suffice.

Fanboy Tribalism

One of the most entertaining things about The Verge’s review of the Lumia 900 was not anything editor-in-chief Joshua Topolsky said in the review itself. No, what I enjoyed most was the tidal wave of wrath that descended upon him from the Windows Phone fanboys, who it seemed could not be satisfied by anything less than a proclamation that the phone had a dispensation from God himself to become the greatest device of our time. The post itself has over 2,400 comments at the moment I’m writing this, and for weeks after it went up any small update about Windows Phone on The Verge drew the ire of this contingent.

The fanboy phenomenon is well known among tech journalists, many of whom have been accused of fanboyism themselves. It’s a frequent complaint among the Vergecast’s crew that when they give a negative review to an Android phone, they are called Apple fanboys; when they give a negative review to a Windows Phone device, they are called Android fanboys; and so on.

To the diehard brand loyalist, the only way that other people could fail to see their preferred brand exactly the same way that they see it is if those other people have had their judgment compromised by their loyalty to some other brand. So Joshua Topolsky’s failure to understand the glory that is the Lumia 900 stems from the fact that he uses a Galaxy Nexus, an Android device, and his Android fanboyism makes it impossible for him to accurately judge non-Android things.

There came a certain moment when I realized that fanboy tribalism was a symptom of something innate in human nature, and that you saw it in every subject that had news and reporting of some sort. It may have become cliché to compare partisan loyalty with loyalty to a sports team, but the analogy is a valid one. Just as there are brand fanboys, there are sports team fanboys and political party fanboys.

Back in middle school, I got really wrapped up in this–as a Nintendo fanboy. I had a friend who was a really big PlayStation fanboy, and we had the most intense arguments over it. I don’t think I’ve ever had arguments that got as ferocious as those since–not about politics, not about morality, not about anything. We would each bring up the facts that we thought should have made it obvious which console was superior, and then get infuriated that the other side didn’t immediately concede defeat. I personally always came prepared with the latest talking points from Nintendo’s very own Pravda, Nintendo Power Magazine.

Cognitive Biases and Group Dynamics

Cognitive science has a lot to say about why people act this way. A lot of progress has been made in cataloging the various biases that skew how human beings see the world. Acknowledging that people have a confirmation bias has become quite trendy in certain circles, though it hasn’t really improved the level of discourse. My favorite trope in punditry these days is when one writer talks about how a different writer, or a politician they disagree with, can’t see the obvious truth because of their confirmation bias–ignoring the fact that the writer himself has the very same bias, as all humans do!

Most of the discussion around cognitive biases centers on how they lead us astray from a more accurate understanding of the world. The more interesting conversation focuses on what these biases emerged to accomplish in the first place, in the evolutionary history of man. The advantage they conferred in cementing group formation in hunter-gatherer societies is something that moral psychologist Jonathan Haidt has explored in his recent book The Righteous Mind. Arnold Kling has an excellent essay in which he applies Haidt’s insights to political discourse.

The fact is that even in our modern, cosmopolitan world, we human beings remain a tribal species. Only instead of our tribes being the groups we were born among and cooperate with in order to survive, we have the tribe of Nintendo, the tribe of Apple, and the tribe of Republicans.

When the Apple faithful read technology news, they aren’t looking for information, not really. They’re getting a kind of entertainment, similar to the kind that a Yankee fan gets when reading baseball news. Neither has any decision that they are trying to inform.

Political news is exactly the same. When a registered Democrat reads The Nation, we like to think that there is something more sophisticated going on than with our Apple or Yankee fan. But there is not. All of them might as well be my 13-year-old self, reading the latest copy of Nintendo Power. The Democrat was already going to vote for the Democratic candidate; it doesn’t matter what outrageous thing the latest article in The Nation claimed Republicans were doing.

Information as Rhetoric

I think the fear that there might not be enough truth-seekers out there fighting to get voters the salient facts about the rich and powerful is misplaced, for a few reasons. For one thing, in this day and age, it is much easier to make information public than it is to keep it secret. For another, it is rarely truth-seekers who leak such information–it is people who have an ax to grind.

The person who leaked the emails from the Climate Research Unit at the University of East Anglia wasn’t some sort of heroic investigative journalist with an idealistic notion of transparency. They were undoubtedly someone who didn’t believe in anthropogenic global warming and wanted to dig up something to discredit those who did. They were a skeptic fanboy, if you like, out to discredit the climate fanboys.

The people that get information of this sort out to the public are almost always pursuing their own agendas, and attempting to block someone else’s. It’s never about truth-seeking. That doesn’t invalidate what they do, but it does shed a rather different light on getting as much information as “democracies demand”. Democracies don’t demand anything–people have demands, and their demands are often to make the people they disagree with look like idiots and keep them from having any power to act on their beliefs.

To satisfy either their own demands or that of an audience, some people will pursue information to use as a tool of rhetoric.

How Democracies Behave

Let us think of this mathematically for a moment. If information is the input, and democracy is the function, then what is the output?

I’m not going to pretend to have a real answer to that. There’s an entire field, public choice, with scholars dedicating a lot of research and thought to understanding how democracies and public institutions in general behave and why. My father has spent a lot of time thinking about what impact information in particular has on political and social outcomes. I am no expert on any of these subjects, and will not pretend to be.

I will tentatively suggest, however, that people do not vote based on some objective truth about what benefits and harms us as a society. I think people vote based on their interests: their narrow material interest–whether a particular law is likely to put them out of work or funnel more money their way–but also their ideological or tribal interest–whether it advances a cause they believe in or a group they consider themselves a part of.

So I don’t really see a reason to insist on subsidizing journalism. All that will accomplish is bending those outlets towards the interests of the ones doing the subsidizing.

From Politics to Porcelain

I must study Politicks and War that my sons may have liberty to study Mathematicks and Philosophy. My sons ought to study Mathematicks and Philosophy, Geography, natural History, Naval Architecture, navigation, Commerce and Agriculture, in order to give their Children a right to study Painting, Poetry, Musick, Architecture, Statuary, Tapestry and Porcelaine.

-John Adams

Think of the market as a set of magnets, pulling people into dense clusters. 80 percent of the population of the US lives in a large city’s metropolitan area, and the top 10 metropolitan areas alone account for a quarter of the population. Cities like Chicago and New York pull people not only from all over the country, but from all over the world. As new people are born and raised in each metropolitan area, the market sorts out where their skills would best be put to use. If it’s somewhere other than where they are, the magnet is engaged to pull them to the new destination.

People are not only pulled towards locations, but towards careers and towards investing in particular skillsets.

The actual mechanism in place of the figurative magnets is not always as straightforward as offering more money to live in one place rather than another, or to take one career track over another, but it often is. There are other things, too, such as what professional development courses and training your current employer is willing to pay for you to take.

Taking as my inspiration the John Adams quote at the top of this post, I’m going to argue that the mechanisms the market uses to pull people into particular locations and career paths grow weaker as a population grows wealthier.

Where Have All the STEM Majors Gone?

It’s a disconcerting question for many an analyst of education in the US–why are we graduating so few majors in science, technology, engineering, and math? Alex Tabarrok has the numbers:

Consider computer technology. In 2009 the U.S. graduated 37,994 students with bachelor’s degrees in computer and information science. This is not bad, but we graduated more students with computer science degrees 25 years ago! The story is the same in other technology fields such as chemical engineering, math and statistics. Few fields have changed as much in recent years as microbiology, but in 2009 we graduated just 2,480 students with bachelor’s degrees in microbiology — about the same number as 25 years ago. Who will solve the problem of antibiotic resistance?

If students aren’t studying science, technology, engineering and math, what are they studying?

In 2009 the U.S. graduated 89,140 students in the visual and performing arts, more than in computer science, math and chemical engineering combined and more than double the number of visual and performing arts graduates in 1985.

Anyone can see that the average computer science major is going to make more money than the average visual arts major. The market has engaged the magnets, and yet students aren’t budging.

Education isn’t the only area where the magnets’ strength is waning. According to the New York Times:

The likelihood of 20-somethings moving to another state has dropped well over 40 percent since the 1980s, according to calculations based on Census Bureau data. The stuck-at-home mentality hits college-educated Americans as well as those without high school degrees. According to the Pew Research Center, the proportion of young adults living at home nearly doubled between 1980 and 2008, before the Great Recession hit.

What is going on?

Lower Stakes, Greater Sacrifice

Let’s compare two hypothetical individuals, Tom and Harry. Tom is a 20-year-old living 50 years ago, and Harry is a 20-year-old today. My general hypothesis is that there are lower stakes for Harry’s decisions than for Tom’s, and that anything which requires a great deal of time and effort requires Harry to give up more than Tom had to.

Let’s consider Harry first. In the immediate term, he has an enormous number of options for how to spend his time. There’s cable TV, the internet, video games–a whole myriad of stuff. Any field of study that takes up a lot of his time means giving up time for any of that. As Tabarrok described, the average individual like Harry today chooses majors that not only demand less of his time–thus giving him more time to play video games–but are also enjoyable in themselves, such as the visual arts. After college, if he can’t get a job in anything resembling what he majored in, he can probably live with his parents, where he’ll still be able to enjoy many of the things he was already doing with his free time.

Now consider Tom. Sure, he probably had access to TV, but it wasn’t as pervasive as it is today, and it had like three channels. There were no video games, there was no internet or web; there weren’t even personal computers. If he picked a major that was a dud in the marketplace, maybe he could live with his parents–though he was less likely on average than Harry to be able to, as parents today are much wealthier than parents were fifty years ago–but what would he do there? Fifty years ago you needed more money to be able to get anything approaching the level of options that Harry has available almost by default.

Consider the different stakes: if Tom doesn’t get his career going, he becomes a burden on a family that might not be able to afford it, and he is also probably bored out of his mind and increasingly isolated. Harry, on the other hand, is much more likely to have parents who can afford that burden, and he has much more to do while he lives with them. He can entertain himself, and he can talk to people online; you may argue that the latter isn’t as fulfilling as in-person socializing, but it’s far less lonely than having no one to talk to.

Now consider what each has to give up by pursuing a STEM career: Harry loses out on hours of gaming, movies, TV, browsing the web, talking to people on Twitter, and so on. Tom doesn’t have any of that to lose.

The decline in the portion of men who are employed has been a secular trend for decades.

It has been hidden in the overall employment-to-population ratio by the entry of women into the workforce. Note, however, that though the portion of women who are employed has grown, it has never reached even the level to which men’s employment has now fallen, during a soft labor market.

The good news is that we are so wealthy as a society that fewer men need to work than they used to. The bad news is that our wealth is making it harder to convince people to do the difficult work required to make the kinds of material breakthroughs that people in the STEM fields are able to make. It has likewise grown harder to convince them to move away from their friends and family in order to go to the city where their particular skillset might have the greatest impact.

Studying Porcelain

John Adams was right–the mathematical, architectural, and commercial know-how of our ancestors has made it possible for more of us to study poetry and comparative literature. When people are bemoaning the lack of STEM majors and labor mobility, they should remember that the whole point of wealth is to provide us with more options. If someone is more satisfied spending their time reading and writing fiction rather than learning statistics or trigonometry, there is nothing wrong with that. They can increase our overall wealth just as much as a scientist can, if they produce things that are valued by a lot of other people.

On the other hand, it takes STEM skillsets to cure cancer or build self-driving cars, and the share of the population with those skillsets continues to fall in this country.

Still, I’m not too concerned. The vast majority of the world is nowhere near as wealthy as we are. Engineers, programmers, and chemists are being trained in unprecedented numbers in countries like China and India, and for the most part the whole world will benefit from their advances. As those countries grow wealthier, they’ll experience the same phenomena, but we’ll still be talking about enormous numbers of people with STEM skillsets. And we’re a long ways off from the developing world reaching a level of wealth comparable to the US or Western Europe.

 

Semi-Related Reading: