The Diffusion of Innovations

One kind of uncertainty is generated by an innovation, defined as an idea, practice, or object that is perceived as new by an individual or another unit of adoption. An innovation presents an individual or an organization with a new alternative or alternatives, as well as new means of solving problems. However, the probability that the new idea is superior to previous practice is not initially known with certainty by individual problem solvers. Thus, individuals are motivated to seek further information about the innovation in order to cope with the uncertainty that it creates.

-Everett Rogers, Diffusion of Innovations, 5th Edition.

No one can pretend to a comprehensive understanding of human social systems until they have read the latest edition of Everett Rogers’ Diffusion of Innovations, or familiarized themselves with the literature it surveys by some other means. This is not to say that you will achieve perfect knowledge of such systems upon completing the book, nor would Rogers have made such a claim. What Rogers provides is a sense of how much has been accomplished in the young field he helped to create, and how many unanswered questions still remain.

The Basics

[Image: the S-shaped adoption curve. Source: UNODC]

Rogers is relentless in his categorization and definition of concepts in the diffusion model, but some basic notions can be spelled out without resorting to his level of detail.

The contribution of the diffusion literature that has itself diffused widely beyond the field is the concept of the early adopter. Rogers lays out several categories of adopter, including a stage before the early adopter, which he calls the “innovator”. Counterintuitively, the innovator is not actually the one who comes up with the innovation, but is simply the very first to adopt it. Then come the early adopters, followed by the early majority, the late majority, and the laggards.

Studies conducted in different disciplines across a broad range of subjects over the course of decades have consistently found that adoption, plotted over time, looks like the S-shaped curve pictured above. The initial adoption period, during which only the innovators and the early adopters are adopting, begins relatively slowly. The middle period, when the early majority and then the late majority adopt, occurs extremely quickly. Finally, the laggards are the last to the party, and even after everyone else has adopted, their adoption proceeds quite slowly.
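
To make that shape concrete, here is a minimal sketch of my own (not taken from Rogers, and the parameter values are invented) of the logistic curve that diffusion researchers typically use to model cumulative adoption over time:

```python
import numpy as np
import matplotlib.pyplot as plt

# Cumulative adoption modeled as a logistic (S-shaped) curve:
#   adoption(t) = K / (1 + exp(-r * (t - t_mid)))
# K     : eventual share of the population that adopts (here, everyone)
# r     : steepness of the middle "takeoff" period
# t_mid : point in time when adoption is growing fastest
K, r, t_mid = 1.0, 0.6, 10.0

t = np.linspace(0, 20, 200)  # time, in arbitrary units (e.g. years)
adoption = K / (1 + np.exp(-r * (t - t_mid)))

plt.plot(t, adoption * 100)
plt.xlabel("Time")
plt.ylabel("Cumulative adoption (%)")
plt.title("Idealized S-shaped diffusion curve")
plt.show()
```

The slow start corresponds to the innovators and early adopters, the steep middle to the early and late majorities, and the long flat tail to the laggards.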

The part of the diffusion curve from about 10 percent adoption to 20 percent adoption is the heart of the diffusion process. After that point, it is often impossible to stop the further diffusion of a new idea, even if one wished to do so.

Heterogeneity and homogeneity are crucial components of the social system in which innovations spread. Innovators are standalone individuals who are so different from the rest of the people in the social system that their adoption does nothing to encourage other individuals to adopt the innovation. I used to think that Robert Scoble was the quintessential early adopter, but by Rogers’ terminology I think he is actually an innovator. He tries out absolutely everything, often years before anyone else does. The way he uses the innovations he adopts is often very different from how much later adopters will end up using them. I think few people actually adopt something because the Robert Scobles of the world did it first.

On the other hand, early adopters are different enough that they are more likely to adopt an innovation than the majority, but similar enough to the majority that they are much more likely to follow suit eventually.

Early adopters are a more integrated part of the local social system than are innovators. Whereas innovators are cosmopolites, early adopters are localites.

In spite of what many early 20th century communications thinkers believed, mass media has an insignificant effect on our behavior compared to the influence of our peers. Once adoption reaches the “majority”–which Rogers claims accounts for about 68 percent of a population–the adoption rate skyrockets, because the early and late majorities are composed of a large number of highly homogeneous individuals who look to one another for cues about whether the innovation is worth the effort of adopting.

The laggards are the last group to adopt, and their rate of adoption remains quite slow relative to the big upsurge in the middle period of diffusion. One interesting thing I had not realized before reading Diffusion of Innovations is that there usually exists a big socioeconomic gap between early adopters and laggards. This makes a certain sense–the downside risk is smaller for a relatively wealthy person taking on the costs of adopting an untested innovation than it is for a relatively poor one. The difference between the two categories isn’t always one of wealth, though; the gap usually exists in some other form of social status.

There is much, much more to it than this basic picture. An enormous amount of work has been done studying the various communication channels through which innovations spread, the social systems that provide the institutions and context through which adopters interpret those innovations, and just about every aspect of the innovation-generation, diffusion, and implementation processes.

The Book and Its Author

Diffusion of Innovations is an interesting book with an interesting history. I don’t know if it’s really accurate to call what I read the 5th edition, as I get the sense that it is so radically different from the first as to be almost an entirely distinct book. The first edition was published in 1962, with the express intention of unifying the research efforts conducted in disparate academic disciplines and providing a common theoretical framework. The 5th edition was published in 2003 and is just as concerned with criticizing and exposing the flaws in what Rogers calls “the classical diffusion model”–the one he himself was instrumental in formalizing!–as it is with introducing the basics to newcomers.

Rare is the scholar who introduces a theoretical model that becomes the foundation for an entire line of academic research. Rarer still is the scholar who is able to see various critiques and contradictions to his model, accept them, and work to improve the model! Rogers was exceptionally open-minded and well-read. On the latter count, every theoretical concept introduced in the book is immediately followed up with specific case studies to demonstrate what it means in practice. He was there at the very beginning of diffusion research and lived to see it evolve.

My only regret where the book is concerned is that it was not written more recently–at the time it was published, Rogers’ most recent source of information on Internet adoption showed that there were a little over 500 million computers connected to it in the world. Today, over 800 million people are active on Facebook alone! From his Wikipedia page I see that Rogers passed away in 2004–this is truly tragic, as I would have loved to read what he thought about what has transpired on the web in the 9 or so years since the 5th edition was published.

Any scholar of human nature who isn’t familiar with this literature owes it to themselves to read this book.

Parting Ways With 2011

2011 was a year of turmoil and misfortune. Nonbeliever though I am, I am tempted to say that it was a cursed year.

Some would point to the movements that began in Tunisia, became the Arab Spring, and then spread around the world, and say that 2011 was in fact a year of hope. Unless these movements result in lasting, positive change, however, we will only be able to say that 2011 was a year of great upheaval and unrest.

I wish I could say that I will remember it as the year that I got engaged, but then, it was also the year my fiancée went to the emergency room three times for two complete freak accidents. I wish I could merely remember it as the year we spent five romantic days in San Francisco, culminating in the wedding of our two good friends. However, I cannot remember that without thinking also of the call I received the very last night there informing me that my aunt had passed away at the age of 59. What would have been her 60th birthday came and went earlier this month.

2010 concluded on a joyous note, as family gathered from far and wide to celebrate my grandmother’s 90th birthday. 2011 began with a tragedy, as her brother, 5 years her senior, passed away in early January. It was not unexpected, yet in my short life I’ve already learned that every passing is a surprise, no matter how anticipated. He was a great man, a loving father and uncle, and we were all thankful that he had made it through nearly all of his 95 years of life with no mental deterioration whatsoever until very near the end.

I have since heard from friends and family that they lost loved ones and suffered several misfortunes in the first half of the year, but between my granduncle’s passing and the phone call I received in San Francisco, 2011 showed the potential for being a wonderful year.

Knowing me too well to trust my judgment on such matters, Catherine accompanied me to pick out a ring. So we made an appointment, and one Monday in April we took an early morning Bolt Bus up to New York City, where we made our way to a tiny little place in the Diamond District. We picked a beautiful ring, and celebrated with lunch at Le Bernardin. We would keep the trip a secret until I had formally proposed, meaning I had to come up with an excuse when my parents left me a worried voicemail because I hadn’t been flooding Twitter and Facebook the way I do on a typical day.

[blackbirdpie id=”60120396216795136″]

[blackbirdpie id=”60154260209418240″]

When the ring arrived a few weeks later and I proposed, we had a romantic dinner together at Cork, one of our favorite restaurants in the neighborhood. We then went through the age-old process of deciding who needed to be told first, followed by addressing the more modern question of “who do we want to make sure knows about this before we put it on Facebook?”

Once that was taken care of, announcing it on Facebook and Twitter was really very fun. Facebook automatically does this thing where it pulls up pictures that have both of us in them; it was very nice. Of course, the announcement ended up getting slightly overshadowed by a minor event you may have heard about that happened later that evening.

[blackbirdpie id=”64854034711977984″]

We had been going back and forth on whether to go out to San Francisco for our friends’ wedding because of how big a commitment such a trip would be, but at a certain point we decided that we just did not want to miss it. So we turned the trip into our vacation. We used Airbnb to find an extremely affordable place to stay for five nights. We reached out to our friends who had lived in San Francisco before, and they reached out to their friends who were still there–and the response was overwhelming. On June 1st, we flew out to San Francisco armed with more than enough information about the restaurant and cultural scene there to ensure we would have a good time.

It was one of the best vacations I have ever had, if for no other reason than I shared it with her. It was also the first time that I was really able to appreciate the food culture of a place I was visiting; before I met Catherine I was an extremely picky eater, and although I had been to Paris and Madrid and elsewhere, I had not even attempted to enjoy the local cuisine. Catherine began broadening my tastes early in our relationship, and by the time we went to San Francisco I was trying everything and anything. It’s a beautiful city and we had a fantastic time. The wedding was wonderful and a lot of fun, as well.

We were sitting in the little room we had rented in a flat in the Mission District late in the afternoon of June 5th, the day after the wedding and our last day in San Francisco. We were trying to decide what to do for dinner; at that stage in the trip neither of us were feeling very adventurous so we were thinking of what we could do that was close by. In the middle of this discussion I received a call from my mother. I could tell something was wrong, and I was afraid that something had happened to one of my grandparents. Then she told me it was my Aunt Mari, that she was gone.

I don’t really remember the initial explanation she gave me, and I wasn’t much good at conveying the details to Catherine. I was, frankly, in shock. How could this have happened? Catherine and I spent our last evening in San Francisco in a quiet, mostly empty wine bar, not far from where we were staying, trying not to think too much about the news which seemed bigger than my mind could begin to absorb.

2011 will always be the year that my Aunt Mari died.

The year did not go well after that, either. I don’t feel comfortable talking about all of it here out of respect for the privacy of the particular individuals, but several of our loved ones have struggled with health problems–physical, mental, and emotional. One of my best friends in the world had an anxiety attack on a scale that she had never experienced before. Several members of both of our families have ended up in hospitals. Catherine herself was there three times–once because she was hit by someone on a bicycle, and twice more after she accidentally splashed boiling water on herself. There is much about the year after our return from San Francisco that was truly wretched.

However, I am not so blind as to miss how lucky we really are, through all of this.

Much of our pain is the pain of seeing the people we love suffer, yet this is an unavoidable part of having so many wonderful people in our lives, from family to friends. The tragedies that have happened this year have shown me how truly lucky I am to know such truly good people. I am so proud to be joining Catherine’s family; the way they came together to support one another this year was very humbling. From friends and family alike, I saw people commit acts of love and kindness, big and small, for those who were hurting.

There is no one who I knew in January that I think less of now in December as a result of what transpired in between. 2011 was a troubling, awful year, but I wouldn’t have chosen to navigate through it with any other group of people than the ones I had.

I hope the journey we take through 2012 is a better, brighter one, but either way I am eternally grateful for the people I will be taking it with.

A Tale of Two Audiences

Disclaimer: the following is simply a hypothesis, a story if you will. I submit it to you with no pretension of either originality or authority.

When content companies look at their incoming traffic, they divide it into two general categories–referral traffic and direct traffic.

They’re both fairly self-explanatory. Referral traffic comes from people who found you from somewhere else. The biggest category of this in general is search traffic, but links from Facebook, Twitter, or someone’s blog are also typical sources.

Direct traffic, as the name implies, is made up of people who go directly to the site. Often this means that they are loyal followers of your content.
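
As a rough illustration of the distinction (a toy example of my own, not how any particular analytics package actually does it), a page view that arrives with no referrer is usually counted as direct, and everything else is bucketed by where the referrer points:

```python
from typing import Optional
from urllib.parse import urlparse

# Hypothetical, deliberately incomplete lists, just for the sketch.
SEARCH_ENGINES = {"google.com", "www.google.com", "bing.com", "www.bing.com"}
SOCIAL_SITES = {"twitter.com", "t.co", "facebook.com", "www.facebook.com"}

def classify_visit(referrer: Optional[str]) -> str:
    """Bucket a single page view as direct traffic or a flavor of referral traffic."""
    if not referrer:
        return "direct"  # typed the URL, used a bookmark, etc.
    host = urlparse(referrer).netloc.lower()
    if host in SEARCH_ENGINES:
        return "referral:search"
    if host in SOCIAL_SITES:
        return "referral:social"
    return "referral:other"  # blogs and other sites linking in

print(classify_visit(None))                                   # direct
print(classify_visit("https://www.google.com/search?q=nyt"))  # referral:search
print(classify_visit("https://t.co/abc123"))                  # referral:social
```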

A loyal user is much more valuable than someone who finds an article of yours from Google, gives it a look, and then never comes back again. They’re more valuable in the quantifiable sense that they see your ads a lot more than that drive-by search user, but they’re more valuable in less obvious ways as well.

The vast majority of the traffic to the big content sites is referral traffic. While an individual referral visitor may be less valuable than an individual direct visitor, the total amount of revenue from referral traffic is much larger. This is the reason that the internet is rife with SEO spam and headlines that are more likely to get clicked if tweeted. Search traffic is a gigantic pie, and empires have been built on it alone, with practically zero direct traffic.

However, having a loyal user base that comes to your site regularly is extra valuable precisely because it is likely to get you more referral traffic. Consider: loyal users are more likely to share a link to content on your site from Twitter, Facebook, Tumblr, Reddit, or a good old-fashioned blog. This is exactly the kind of behavior that generates referral traffic–directly from people clicking those links, indirectly from those people possibly sharing the links themselves, and yet more indirectly if they link from their blogs and thus improve your Google ranking.

Of course, even within your loyal base there is variation in how valuable a particular user is. The guy who has read the Washington Post for forty years and has just moved online is less valuable to the Post than someone like Mark Frauenfelder, who might link to one of their articles on Boing Boing, improving their Google ranking further and sending some of his traffic their way. But it’s still useful to think broadly about direct traffic vs. referral traffic.

The Porous Wall

Back in March, the New York Times launched something like a paywall. There are numerous particulars and exceptions, but the long and short of it is that someone like me, who only ever visits the Times when someone I know links to it, faces no wall of any sort. Meanwhile, someone who has loyally visited the Times every day for years will have to pay if they want to see more than 20 articles a month.

At the time, Seamus McCauley of Virtual Economics pointed out the perverse incentives this created: the Times was effectively punishing loyalty without doing anything to lure in anyone else. Basic economic intuition dictates that this should mostly result in reducing the amount of direct traffic that they receive.

The Times did spend a reported $40 million researching this plan, and while I’ll never be the first person to claim business acumen on the part of the Times, you have to think they did something with all that money. As usual, Eli had a theory.

[blackbirdpie id=”52741320770469891″]

Imagine, for a moment, that all of the Times’ direct traffic was composed of individuals who were perfectly price inelastic in their consumption of articles on nytimes.com. That is, they would pay any price to be able to keep doing it. Assume, also, that all of the Times’ referral traffic was perfectly price elastic–if you raised the price by just one cent, they would abandon the site entirely. The most profitable path for the Times in this scenario would be, if possible, to charge direct traffic an infinite amount to view the site, while simultaneously charging the referral traffic nothing so they keep making the site money by viewing ads.
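
To make the logic of that thought experiment concrete, here is a toy revenue calculation (every number is invented purely for illustration) comparing a wall that charges everyone with one that charges only the loyal, inelastic readers while leaving the elastic referral visitors alone:

```python
# Hypothetical monthly figures, invented for illustration only.
direct_visitors = 1_000_000       # loyal readers, assumed (nearly) price-inelastic
referral_visitors = 20_000_000    # drive-by visitors, assumed perfectly price-elastic
ad_revenue_per_visit = 0.02       # dollars of ad revenue per visit
subscription_price = 15.00        # dollars per month

# Option A: charge everyone. The perfectly elastic referral traffic vanishes.
flat_wall = direct_visitors * (subscription_price + ad_revenue_per_visit)

# Option B: charge only the direct traffic; referral traffic keeps viewing ads for free.
porous_wall = (direct_visitors * (subscription_price + ad_revenue_per_visit)
               + referral_visitors * ad_revenue_per_visit)

print(f"Flat wall:   ${flat_wall:,.0f}")    # ~$15.0M
print(f"Porous wall: ${porous_wall:,.0f}")  # ~$15.4M
```

Under these made-up assumptions the porous wall strictly dominates: the subscription revenue is untouched, and the referral visitors keep generating ad revenue instead of disappearing.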

The reality is a less extreme dichotomy–though I wouldn’t be surprised if a significant fraction of the Times’ referral traffic did vanish if they tried to charge them a penny. Still, the direct traffic, while undoubtedly less elastic than the referral traffic, is unlikely to be perfectly inelastic.

Getting a good idea of just how inelastic would be a very valuable piece of information for the Times to have, and I think Eli is right that that is exactly what they spent the $40 million on–that, and devising the right strategy for effectively price discriminating between the two groups.

It’s too soon to tell if the strategy will work for the Times, or if it’s a viable strategy for any content company to pursue.

Come One, Come All

2011 also saw the birth of a new technology website, The Verge.

The Verge began after a group of editors from Engadget left, in the spirit of the traitorous eight, because they believed they could do it better than AOL would allow them to. During their time at Engadget, they had developed a following through the listeners of their podcasts and their presence on social media–Twitter in particular. They also were fairly active in the comment sections of their own posts.

In order to bridge the gap between the launch of the new site and the departure from the old one, they set up a makeshift, interim blog called This Is My Next, and continued their weekly podcast. This kept them connected with the community they had built around them and allowed them to keep building it while they were also working on launching The Verge.

There are a lot of things that I really like about The Verge. First, they bundled forums into the site and gave people posting in them practically the same tools that they use for the posts on the main site. The writers themselves participate in the forums, and when they find a user’s post particularly exceptional, they will highlight it on the main site and on The Verge’s various social network presences.

Second, they do a lot of long-form, image- and video-rich niche pieces that may take time to get through but which have a kind of polish that is rare among web-native publications.

When I told a friend of mine about how much I loved these pieces, he very astutely asked “but it costs like eleven times as much to make as a cookie-cutter blog post, and do you really think it generates eleven times more revenue?”

This question bothered me, because in a straightforward sense he seems to be right. Say that Paul Miller could have written eleven shorter posts instead of this enormous culture piece on StarCraft. There is no way that The Verge made eleven times as much in ad revenue from that post as they would from the eleven shorter posts he could have written.

But posts like that one attract a certain kind of audience. I may not read every long feature piece that The Verge does, but I like that they are there and I read many of them. The fact that they do those pieces is part of the reason that I made them my regular tech news read rather than Engadget or Gizmodo.

In short, the clear strategy being pursued by The Verge is to reward their most loyal audience, even if it doesn’t directly result in more revenue than trying to game search engines. There isn’t always tension between the two strategies–one of the site’s features is called story streams, pages like this one which make it easy to see how a particular story has unfolded so far. Each stream is also one more page that could potentially show up in Google search results.

Still, having followed the group very closely at Engadget first, and now at The Verge, it seems clear to me that the core mission is to build up the loyal visitors. If a feature piece costs eleven times as much as a shorter one, but makes a loyal reader out of someone who indirectly brings 100 visitors to the site over time by sharing links on Twitter and Facebook, was the feature piece profitable?
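
Here is one toy way to frame that question in numbers (all of them made up; tweak any assumption and the answer flips):

```python
# Hypothetical figures, purely for illustration.
cost_short_post = 200.0                 # dollars to produce a quick, cookie-cutter post
cost_feature = 11 * cost_short_post     # the feature costs roughly 11x as much
revenue_per_pageview = 0.02             # dollars of ad revenue per page view

direct_views = 50_000                   # people who read the feature itself
new_loyal_readers = 1_000               # readers the piece converts into regulars
visits_per_loyal_reader = 100           # visits each brings over time via shared links

direct_revenue = direct_views * revenue_per_pageview
loyalty_revenue = new_loyal_readers * visits_per_loyal_reader * revenue_per_pageview

print(f"Feature cost:              ${cost_feature:,.0f}")
print(f"Direct ad revenue:         ${direct_revenue:,.0f}")
print(f"Revenue via new loyalists: ${loyalty_revenue:,.0f}")
print("Worth it?", direct_revenue + loyalty_revenue > cost_feature)
```

The point is not the particular numbers, which are arbitrary, but that the loyalty term can easily be larger than the direct one.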

The Verge is even younger than the Times’ paywall, so time has yet to tell if their approach is sustainable.

Farming Referral Traffic

At the onset of 2011, a big rumble was building in the tech community about Google’s search quality. Many claimed that it had become filled with spam. Google was not deaf to these criticisms, and argued that spam was actually at an all-time low. The problem wasn’t spam, they argued, but “thin content”–and what have come to be called content farms.

The logic of the content farm is that with enough volume, enough of your pages will make it high enough in search results to get you enough referral traffic to make a pretty penny. In short, a content farm focuses its efforts on acquiring referral traffic and foregoes any real effort to build up direct traffic.

This in itself isn’t a problem, of course. If you have 500,000 of the greatest articles ever written by mankind, then you, Google, and search users are all better off if you rank highly in relevant search results. And many companies that took a beating when Google began targeting thin and low-quality content for downranking have cried that they were unfairly lumped in with the rest just for having a lot of pages.

The content farm controversy aside, there is a clear and obvious place for a site that has an audience made up almost entirely of referral traffic–reference sites. Wikipedia, for instance, at one point received somewhere in the neighborhood of 90 percent of its visits from referral traffic. Though it is probably the biggest receiver of search traffic, it is not unusual in this regard–there is a whole industry of people whose livelihoods are made or broken by the various tweaks of Google’s algorithm.

The Path Forward

As I said, much remains uncertain. Five or ten years from now we’ll look back and be able to say which experiments proved to be the models for striking the balance between these audiences. For my part, I can only hope that it looks more like what The Verge is doing than what the New York Times is trying.

Create Value, Not Jobs

Treat all economic questions from the viewpoint of the consumer, for the interests of the consumer are the interests of the human race.

-Frederic Bastiat

Public discourse on matters of the economy is and has always been dominated by the idea that the road to prosperity is to create jobs. In a moment of high unemployment, the “create jobs” rhetoric becomes that much more prevalent. We get a “Jobs Bill”; opponents of Obama’s reform call it “job destroying”; after a brief period of discussing deficits and debt, national news outlets turned right back to talking about jobs.

Reading Tyler Cowen’s The Great Stagnation and Erik Brynjolfsson and Andrew McAfee’s Race Against the Machine, I was surprised to see that they considered lack of jobs to be one of the key problems of our times. Surprised, because I have become accustomed to economists arguing that jobs are not what matter, wealth is. Upon closer examination, however, I think that what they are arguing is consistent with that–they are putting it into the rhetoric of jobs because that is accessible to most people, but what they are saying is different from what a politician means when he calls for job creation.

A Very Human Propensity

This division of labour, from which so many advantages are derived, is not originally the effect of any human wisdom, which foresees and intends that general opulence to which it gives occasion. It is the necessary, though very slow and gradual, consequence of a certain propensity in human nature which has in view no such extensive utility; the propensity to truck, barter, and exchange one thing for another.

-Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations

In a barter economy, things are straightforward. I can only get something I want from you if I give you something that you want. I have to provide you with something of value.

This is still how the economy works on a fundamental level; money is just an intermediary between barter exchanges. Instead of giving you something you want, I give my employer or my customer something that they want. They give me money, which I can give to you so that you can turn around and get something you want. The person that you give it to accepts it because they can turn around and exchange it for something that they want.

Sensing a theme? Wealth is merely the ability to get things that we want. Since most of us are not independently wealthy, we have to work to create things that other people want in order to get what we want. The most common way to do this since the dawn of the industrial revolution has been to work for someone who needs human labor to accomplish some end–an end that is valued by consumers.

But it isn’t the only way–Henry Ford wasn’t an “employee”; he was an entrepreneur who developed more efficient ways to provide consumers with something of value at a lower cost. Moreover, he figured out how large numbers of workers could each add a little value in the process.

There are also freelancers; people who are neither employees nor employers, but work for specific clients at specific times. Rather than providing a valued service steadily over time, they do it on a case-by-case basis, and depending on the industry can face lean seasons and busy seasons.

Value, Not Work

The point is, our goal should never be to “create jobs”. Our goal should be to enable people to contribute something valued by other people. The value is the point, not the work. If someone finds a way to provide value to hundreds of millions of people and it requires no more effort from them than batting their eyelashes, that would be a win.

So why are economists like Cowen and Brynjolfsson talking about jobs? The stories they are telling, while far from the same, have a common theme which I interpret as follows: the forward march of technology has made it very difficult for people who have traditionally had low-skill or even middle-skill occupations to contribute value. As Arnold Kling succinctly put it:

The paradox is this. A job seeker is looking for a well-defined job. But the trend seems to be that if a job can be defined, it can be automated or outsourced.

He goes on to say that people who are capable of working in “less structured environments” are going to get a premium at this moment–in other words, people like entrepreneurs and freelancers.

His story, which he used to call the Recalculation but lately has referred to as Patterns of Sustainable Specialization and Trade (PSST), goes like this:

  1. One industry overwhelmingly dominates the economy (first agriculture, then manufacturing).
  2. Rapid technological change enormously increases the productivity of that industry while providing a lot of untapped potential in other areas.
  3. Since many fewer workers are needed now, there’s a period of massive unemployment before entrepreneurs figure out how to make the most valuable use of all the surplus labor.
  4. A new pattern of sustainable specialization and trade emerges that is optimal for the current state of technology.

In Kling and Brynjolfsson’s story, we’re at step 3. Technology has made it easy to replace workers with machines in old industries, but it is not yet obvious how those workers can contribute value in the young industries. Solving that problem is non-trivial.

An Important Distinction

This is not a matter of semantics. If you think the problem is a lack of jobs, all sorts of dangerous “solutions” may come to mind, from having the government hire en masse for make-work, valueless jobs, to setting high tariffs and immigration restrictions so that domestic companies and labor do not face any foreign competition.

Frederic Bastiat was a 19th century French economic journalist who spilled a lot of ink attacking such foolish notions. You have to think about wealth from the perspective of the consumer. Yes, there would be more “work” to do if we cut off trade and immigration, but it would also impoverish just about everyone as the cost of getting anything would skyrocket. Getting a job is not an end unto itself; the whole point is to trade our labor for other things that we want. Getting a job at the cost of not being able to afford anything is an absurd proposition.

As for make-work jobs, I would rather the government send the poor a check to do what they want with than to force them to “play real job”. At least then they would have the time to think about how they can contribute something of real value!

Economists like Cowen and Kling get it. Farhad Manjoo does not. He wrote:

Most economists aren’t taking these worries very seriously. The idea that computers might significantly disrupt human labor markets—and, thus, further weaken the global economy—so far remains on the fringes.

Certainly technology can disrupt, and is disrupting, human labor markets–but that isn’t going to “further weaken the global economy”. It is going to increase our productivity and make it easier to provide consumers with more value at lower cost. For a time, it will make it hard for people replaced by machines to figure out how they can create additional value.

But we need to get our priorities straight; what we want to do is help people create value. Unless giving someone a job will enable them to create more value than it costs, the existence of that job is counterproductive.

How Could the Results Not Be Art?

Back when I was maybe a sophomore or junior in high school, I took the money I had saved from my summer job and used it to buy StarCraft. My relationship with StarCraft was to be a tragic one–I loved playing it but was absolutely mediocre at playing against other human beings.

Many, many years later I acquired the sequel and started playing it. One thing that struck me, looking at it now, is how rich the universe is that Blizzard created for the game. The world-building and history-building for the two alien species–as well as for mankind’s own progression–is excellent science fiction in its own right. Moreover, the settings, units, and cutscenes are visually rich.

When Roger Ebert declared that video games could never be art back in the spring of 2010, the accusation was not a new one. His post managed to light a fire under the debate at the time, and he did eventually backpedal somewhat. I hesitate to address the subject at all because, like arguments over whether modern art is really art, or any argument around the definition of a word, there isn’t actually a right answer. But a conversation on the subject with some friends recently got my mind working, and I felt an urge, as usual, to think it out through writing.

I’m not interested in what towering intellectuals or the Supreme Court have to say on the matter; and I’m certainly not interested in trying to make some kind of tautological, game=art by definition argument. How we experience art is a very personal thing, so it is from my personal experiences with gaming that I will proceed.

Status


First of all, I think it’s crucial to clarify what it is we’re actually arguing over. I think Alex hits this on the head:

This is really a debate over status, with gamers wanting to elevate the status of their activity, and by proxy, themselves. Opponents fear that calling games art will lower the status of art.

This is what it’s all about. It isn’t because people are really passionate about their particular definitions of the word “art”. It’s because we have culturally come to use the word art to talk about something higher, something refined.

If you read the original Ebert piece, he is quite clear about this–saying that “No one in or out of the field has ever been able to cite a game worthy of comparison with the great poets, filmmakers, novelists and poets.” He even admits that there are many legitimate definitions of art that are in common use which would encompass video games, but still he excludes them. Why? Because they have not demonstrated a propensity for greatness.

A couple of months later Ebert basically said he shouldn’t have stuck his nose in an area he knew next to nothing about–but his perspective isn’t limited to the inexperienced. Hideo Kojima, the mastermind behind the popular Metal Gear Solid series, also believes that video games are not art.

His argument is that video games are not art because they are entertainment. This seems to raise an obvious question–is art never entertaining? That would miss the point, however–Hideo is drawing a line between the lower form of expression–“entertainment”–and the higher one–“art”. He is saying that video games do not reach the status of art. When Ebert says it as an outsider it seems insulting; when an insider like Hideo says it we could consider it a kind of humility. Either way, I have to disagree.

The Art of Making Video Games

Webcomic artist Der-shing Helmer once wrote:

Comics are, to me, the greatest art form because there are so many elements to master. A person who creates a full comic must, at the very least, be a great planner, a great storyteller, have an eye for layout, know how to pencil, ink and color in a way that tells their own story, and of course, they have to know how to write. Becoming more than simply “proficient” at all of these components can easily take a lifetime to learn.

Of course, at big comic publishing houses like DC or Marvel, they have people who specialize in the components–the writer may be a distinct individual from the person who pencils it, who may be different from the person who inks it, and so on. If we accepted Helmer’s criteria for what makes an art form great, the movies would be greater than comics, for you need not only good writers and good visuals, but good actors, and directors, and music. Along these lines, Wagner considered opera to be the greatest art form of all.

As I said, I don’t much care for hard and fast criteria like this, but if there’s one thing that video games have, it’s a lot of elements that require different talents.

The Super Smash Bros. series is a fun example of this, because the premise is so silly but the execution is so good.

First of all, the stages that the game takes place on are beautifully designed.

Some of the components of these stages are designed to be directly interacted with by the characters, but many are there for purely aesthetic purposes.

The opening sequence to the most recent game in the franchise features music composed by longtime video gaming music veteran Nobuo Uematsu.

There are three things that I find wonderful about this sequence. First, it is ridiculous–extremely over the top for a game that is basically Nintendo throwing together favorite characters from disparate popular games and making them fight each other. Second, for those of us who grew up as gamers, it is nostalgic–these are characters I’ve been interacting with since I was five years old. Finally, the music is good–the context may be strange but the fundamentals are sound.

What really blows me away about all of this is the sheer attention to detail that is given by producers of modern, high-end video games. It reminds me of a story I heard once about a Jan van Eyck painting that was so precise, modern-day eye doctors were able to diagnose the subject.

Finally, at the risk of getting semantic, it must be said that there is an art to making a good game. A game does not need a good music score or visual design in order to be fun, but making a game that is fun is hard. Figuring out how to do that is a skill in its own right, one that people like Hideo Kojima and Shigeru Miyamoto possess to an abnormal degree.

Acceptance

A few months ago, my siblings, some friends, and I went to see the National Symphony Orchestra play video game music. Meanwhile, the Smithsonian has an upcoming exhibition on the Art of Video Games. Whether or not particular individuals think of video games as art, it’s clear that high-status institutions, seeking avenues to remain relevant, are increasingly welcoming contributions from video games.

It may be that, given time, video games will be widely accepted as an art form. Even under those circumstances, the art of video gaming, like all art, would remain a highly personal affair.

[blackbirdpie id=”141946128462643200″]