Create Value, Not Jobs

Treat all economic questions from the viewpoint of the consumer, for the interests of the consumer are the interests of the human race.

-Frederic Bastiat

Public discourse on matters of the economy is and has always been dominated by the idea that the road to prosperity is to create jobs. In a moment of high unemployment, the “create jobs” rhetoric becomes that much more prevalent. We get a “Jobs Bill”; opponents of Obama’s reform call it “job destroying”; after a brief period of discussing deficits and debt, national news outlets turned right back to talking about jobs.

Reading Tyler Cowen’s The Great Stagnation and Erik Brynjolfsson and Andrew McAfee’s Race Against the Machine, I was surprised to see that they considered the lack of jobs to be one of the key problems of our times. Surprised, because I have become accustomed to economists arguing that jobs are not what matter; wealth is. Upon closer examination, however, I think what they are arguing is consistent with that view. They are putting it into the rhetoric of jobs because that rhetoric is accessible to most people, but what they are saying is different from what a politician means when he calls for job creation.

A Very Human Propensity

This division of labour, from which so many advantages are derived, is not originally the effect of any human wisdom, which foresees and intends that general opulence to which it gives occasion. It is the necessary, though very slow and gradual, consequence of a certain propensity in human nature which has in view no such extensive utility; the propensity to truck, barter, and exchange one thing for another.

-Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations

In a barter economy, things are straightforward. I can only get something I want from you if I give you something that you want. I have to provide you with something of value.

This is still how the economy works on a fundamental level; money is just an intermediary between barter exchanges. Instead of giving you something you want, I give my employer or my customer something that they want. They give me money, which I can give to you so that you can turn around and get something you want. The person that you give it to accepts it because they can turn around and exchange it for something that they want.

Sensing a theme? Wealth is merely the ability to get things that we want. Since most of us are not independently wealthy, we have to work to create things that other people want in order to get what we want. The most common way to do this since the dawn of the industrial revolution has been to work for someone who needs human labor to accomplish some end–an end that is valued by consumers.

But it isn’t the only way–Henry Ford wasn’t an “employee”; he was an entrepreneur who developed more efficient ways to provide consumers with something of value at a lower cost. Moreover, he figured out how large numbers of workers could each add a little value in the process.

There are also freelancers: people who are neither employees nor employers, but who work for specific clients at specific times. Rather than providing a valued service steadily over time, they do it on a case-by-case basis, and, depending on the industry, they can face lean seasons and busy seasons.

Value, Not Work

The point is, our goal should never be to “create jobs”. Our goal should be to enable people to contribute something valued by other people. The value is the point, not the work. If someone finds a way to provide value to hundreds of millions of people and it requires no more effort from them than batting their eyelashes, that would be a win.

So why are economists like Cowen and Brynjolfsson talking about jobs? The stories they are telling, while far from the same, have a common theme which I interpret as follows: the forward march of technology has made it very difficult for people who have traditionally had low-skill or even middle-skill occupations to contribute value. As Arnold Kling succinctly put it:

The paradox is this. A job seeker is looking for a well-defined job. But the trend seems to be that if a job can be defined, it can be automated or outsourced.

He goes on to say that people who are capable of working in “less structured environments” are going to get a premium at this moment–in other words, people like entrepreneurs and freelancers.

His story, which he used to call the Recalculation but lately has referred to as Patterns of Sustainable Specialization and Trade (PSST), goes like this:

  1. One industry overwhelmingly dominates the economy (first agriculture, then manufacturing).
  2. Rapid technological change enormously increases the productivity of that industry while providing a lot of untapped potential in other areas.
  3. Since many fewer workers are needed now, there’s a period of massive unemployment before entrepreneurs figure out how to make the most valuable use of all the surplus labor.
  4. A new pattern of sustainable specialization and trade emerges that is optimal given the current state of technology.

In both Kling’s and Brynjolfsson’s stories, we’re at step 3. Technology has made it easy to replace workers with machines in old industries, but it is not yet obvious how those workers can contribute value in the young industries. Solving that problem is non-trivial.

An Important Distinction

This is not a matter of semantics. If you think the problem is a lack of jobs, all sorts of dangerous “solutions” may come to mind. Anything from having the government hire people en masse for valueless make-work jobs, to setting high tariffs and immigration restrictions so that domestic companies and workers face no foreign competition.

Frederic Bastiat was a 19th-century French economic journalist who spilled a lot of ink attacking such foolish notions. You have to think about wealth from the perspective of the consumer. Yes, there would be more “work” to do if we cut off trade and immigration, but doing so would also impoverish just about everyone, as the cost of getting anything would skyrocket. Getting a job is not an end unto itself; the whole point is to trade our labor for other things that we want. Getting a job at the cost of not being able to afford anything is an absurd proposition.

As for make-work jobs, I would rather the government send the poor a check to do what they want with than force them to “play real job”. At least then they would have the time to think about how they can contribute something of real value!

Economists like Cowen and Kling get it. Farhad Manjoo does not. He wrote:

Most economists aren’t taking these worries very seriously. The idea that computers might significantly disrupt human labor markets—and, thus, further weaken the global economy—so far remains on the fringes.

Certainly technology can disrupt and is disrupting human labor markets–but that isn’t going to “further weaken the global economy”. It is going to increase our productivity and make it easier to provide consumers with value at lower cost. For a time, it will make it hard for the people replaced by machines to figure out how they can create additional value.

But we need to get our priorities straight; what we want to do is help people create value. Unless giving someone a job will enable them to create more value than it costs, the existence of that job is counterproductive.

How Could the Results Not Be Art?

Back when I was maybe a sophomore or junior in high school, I took the money I had saved from my summer job and used it to buy StarCraft. My relationship with StarCraft was to be a tragic one–I loved playing it but was absolutely mediocre at playing against other human beings.

Many, many years later I acquired the sequel and started playing it. One thing that struck me, looking at it now, is how rich a universe Blizzard created for the game. The world-building and history-building for the two alien species–as well as for mankind’s own progression–are excellent science fiction in their own right. Moreover, the settings, units, and cutscenes are visually rich.

When Roger Ebert declared that video games could never be art back in the spring of 2010, the accusation was not a new one. His post managed to light a fire under the debate at the time, and he did eventually backpedal somewhat. I hesitate to address the subject at all because, like arguments over whether modern art is really art, or any argument around the definition of a word, there isn’t actually a right answer. But a conversation on the subject with some friends recently got my mind working, and I felt an urge, as usual, to think it out through writing.

I’m not interested in what towering intellectuals or the Supreme Court have to say on the matter, and I’m certainly not interested in trying to make some kind of tautological, games-are-art-by-definition argument. How we experience art is a very personal thing, so it is from my personal experiences with gaming that I will proceed.

Status

First of all, I think it’s crucial to clarify what it is we’re actually arguing over. I think Alex hits the nail on the head:

This is really a debate over status, with gamers wanting to elevate the status of their activity, and by proxy, themselves. Opponents fear that calling games art will lower the status of art.

This is what it’s all about. It isn’t because people are really passionate about their particular definitions of the word “art”. It’s because we have culturally come to use the word art to talk about something higher, something refined.

If you read the original Ebert piece, he is quite clear about this–saying that “No one in or out of the field has ever been able to cite a game worthy of comparison with the great poets, filmmakers, novelists and poets.” He even admits that there are many legitimate definitions of art that are in common use which would encompass video games, but still he excludes them. Why? Because they have not demonstrated a propensity for greatness.

A couple of months later Ebert basically said he shouldn’t have stuck his nose in an area he knew next to nothing about–but his perspective isn’t limited to the inexperienced. Hideo Kojima, the mastermind behind the popular Metal Gear Solid series, also believes that video games are not art.

His argument is that video games are not art because they are entertainment. This seems to raise an obvious question–is art never entertaining? But that would miss the point. Kojima is drawing a line between the lower form of expression, “entertainment”, and the higher one, “art”. He is saying that video games do not reach the status of art. When Ebert says it as an outsider it seems insulting; when an insider like Kojima says it we could consider it a kind of humility. Either way, I have to disagree.

The Art of Making Video Games

Webcomic artist Der-shing Helmer once wrote:

Comics are, to me, the greatest art form because there are so many elements to master. A person who creates a full comic must at the very least; be a great planner, a great storyteller, have an eye for layout, know how to pencil, ink and color in a way that tells their own story, and of course, they have to know how to write. Becoming more than simply “proficient” at all of these components can easily take a lifetime to learn.

Of course, at big comic publishing houses like DC or Marvel, they have people who specialize in the components–the writer may be a distinct individual from the person who pencils it, who may be different from the person who inks it, and so on. If we accepted Helmer’s criteria for what makes an art form great, movies would be greater than comics, for they require not only good writing and good visuals but also good actors, directors, and music. Along these lines, Wagner considered opera to be the greatest art form of all.

As I said, I don’t much care for hard and fast criteria like this, but if there’s one thing that video games have, it’s a lot of elements that require different talents.

The Super Smash Bros. series is a fun example of this, because the premise is so silly but the execution is so good.

First of all, the stages that the game takes place on are beautifully designed.

Some of the components of these stages are designed to be directly interacted with by the characters, but much of the design is there for purely aesthetic purposes.

The opening sequence to the most recent game in the franchise features music composed by longtime video game music veteran Nobuo Uematsu.

There are three things that I find wonderful about this sequence. First, it is ridiculous–extremely over the top for a game that is basically Nintendo throwing together favorite characters from disparate popular games and making them fight each other. Second, for those of us who grew up as gamers, it is nostalgic–these are characters I’ve been interacting with since I was five years old. Finally, the music is good–the context may be strange but the fundamentals are sound.

What really blows me away about all of this is the sheer attention to detail given by the producers of modern, high-end video games. It reminds me of a story I heard once about a Jan van Eyck painting so precise that modern-day eye doctors were able to diagnose the subject.

Finally, at the risk of getting semantic, it must be said that there is an art to making a good game. A game does not need a good music score or visual design in order to be fun, but making a game that is fun is hard. Figuring out how to do that is a skill in its own right, one that people like Hideo Kojima and Shigeru Miyamoto possess to an unusual degree.

Acceptance

A few months ago, my siblings, some friends, and I went to see the National Symphony Orchestra play video game music. Meanwhile, the Smithsonian has an upcoming exhibition on the Art of Video Games. Whether or not particular individuals think of video games as art, it’s clear that high-status institutions, seeking avenues to remain relevant, are increasingly welcoming contributions from video games.

It may be that, given time, video games will be widely accepted as an art form. Even under those circumstances, the art of video gaming, like all art, would remain a highly personal affair.

If anyone ever doubted video games as an art form, they need to play through Portal 2 from start to finish. Pure art! Valve sure can make games.

-Mack and Mesh (@MackandMesh)

Elusive Trust

The history of public relations as a field is deeply intertwined with the history of mass media. Post-War public relations in particular became very much a streamlined process of building relationships with publications and broadcast networks. This was what made the most sense under that technological paradigm–if organizations wanted to get a message or project an image to their relevant audiences, mass media were really the only available channels.

As with every communications institution that grew up in the mass media environment, public relations is currently in the painful process of redefining itself.

The more interested I grow in the subject, the more I think about an incident a couple of years ago that seems to me to be a microcosm of the challenges that new media poses to anyone stuck in a mass media mindset.

#amazonfail

Back in April of 2009, a number of books with homosexual themes suddenly stopped appearing in Amazon’s search results. A number of bloggers made a call to arms against what they perceived as deliberate and systematic discrimination. The whole thing blew up on Twitter under the hashtag #amazonfail. Amazon was slow to respond, but eventually explained that there had been a glitch originating in Amazon France that permeated their system. A fuller summary of events can be found here.

What ended #amazonfail was not the effort of Amazon’s PR team, however, but a single post by Clay Shirky. I was astonished at the time by how Shirky’s post seemed to dispel the whole thing like so much smoke in the wind. After he published it, a huge number of people shared the link, flooding the hashtag search for #amazonfail. Then, after less than a day, the volume of tweets on the subject dropped dramatically.

There are several aspects of this incident which are instructive.

First, during the entire time this thing was blowing up, there was a group of people who seized the moment to draw attention to discrimination against homosexuals in general. One interest’s PR disaster is often someone else’s PR opportunity, and that dynamic played a part in how this disaster grew in the first place.

Second, one influential outsider–Clay Shirky–was able to accomplish what no one on Amazon’s PR team could possibly have done. Precisely because he was an outsider with no stake in the matter, his words were more credible. He had a large audience–particularly in those days, when he was blogging fairly regularly. Many of the people who tweeted the link to his post hadn’t tweeted anything else on the subject of #amazonfail. This was noted with frustration by this guy, who had been extremely active during the whole fiasco and did not buy Shirky’s argument. I watched him at the time as he went from being part of a unified angry horde to being one of the few lone voices left who were still outraged after Shirky’s piece went viral.

Shirky’s trust was worth more than anything anyone at Amazon could have done at the time. If Shirky had believed that Amazon’s explanation was just an attempt to cover up something they hadn’t expected to draw so much attention, things would have gone very differently. Since Amazon had credibility–at least in Shirky’s eyes–he took them at their word.

Laying a Foundation

PR is and has always been about trying to influence the story about a company, or individual, or government. That hasn’t changed, and never will. What the modern technological shift has done is made it much more important to build trust. Trust in general, and trust from people like Clay Shirky in particular. If you are caught in a lie, it isn’t going to help you the next time around. And if you establish a pattern of being dishonest, the Clay Shirkys of the world will definitely not give you the benefit of the doubt.

When Facebook PR gets caught trying to bribe bloggers into smearing their competition, that doesn’t exactly inspire trust.

When Apple replaces your iPhone or Macbook, no questions asked, even if your warranty just expired like a week ago–that builds trust. I have never had this experience, but I’ve heard it from a lot of people I know, and it definitely made me feel more confident about my choice to buy an iPhone.

Trust is elusive; it is hard to build and easily lost. There is no formula for winning it, other than the obvious ones–be honest, admit mistakes, and engage your critics. None of which is any guarantee.

I don’t think PR needs to be redefined. The goals are constant over time; it’s just the tools that have changed.

We Are All Storytellers

From early man painting on the walls of caves, to the Enlightenment’s Republic of Letters, to J. K. Rowling writing books that are read in every corner of the globe, humanity has always been a species of storytellers. We are so deeply embedded in our own stories that we often lose sight of this, thinking of ourselves as the rational animal, or the moral animal. But these are parts of larger stories about ourselves; the one thing that is fundamentally human is not rationality or morality. It is our propensity to tell stories.

Characters in an Ongoing Story

Human memories are not stored like computer memory; they are recreated every time we attempt to call upon them. Moreover, they are recreated from the context of our present challenges and perspective. This makes a certain sense from an evolutionary standpoint–the only reason for an animal to have memory at all is to help it solve the problems it faces now and will face in the future.

In practice this means that our vision of our life up until now is always influenced by the story we believe about ourselves at this moment. We all think of ourselves as characters in an ongoing story; our lives are not just a series of unrelated events but a cohesive plot with threads that run throughout. These threads have implications that influence the choices we make.

Most of us also see ourselves as part of a much larger story; the story of our family, the story of our community or culture or nation, the story of mankind–and for many, the story of God. I am not a believer in that sense, but it has been my limited experience that feeling as though you are a part of a story much larger than yourself is both humbling and one of the few true paths to satisfaction. Of course, there are also those who devote themselves to larger causes and leave nothing for themselves–my story of the good life requires avoiding too much myopia and at the same time making sure not to give away too much of yourself.

Coherence is not Correctness

If called upon to explain something, especially our own actions, we will come up with a story that makes sense to us even if it demonstrably has nothing to do with reality. The fact that a story appeals to us does not make it accurate, even if it is internally consistent.

The past year–never mind the rest of the current economic downturn–has seen an enormous amount of imaginative storytelling in the economics community. In January, Tyler Cowen came out with The Great Stagnation, in which he described our economy as stuck in a slowdown after a period of tremendous innovation and before another such period. Though Cowen is a libertarian himself, much of the criticism has come from libertarians who feel that The Great Stagnation contradicts their Schumpeterian story of markets feverishly generating innovation.

More recently, Erik Brynjolfsson and Andrew McAfee came out with Race Against the Machine, in which they argue that innovation has actually been accelerating and the problem is that machines are replacing people in many low-skill occupations and entrepreneurs have yet to figure out what new tasks those people can be productively put back to work doing. Arnold Kling tells a similar story, calling it variously the Recalculation Model and Patterns of Sustainable Specialization and Trade.

This debate between Brynjolfsson and Cowen is an excellent example of the difficulty in making a case for one story over another. Both individuals are very smart and extremely well versed in the literature and the data on the subject under discussion. Both employ statistics from official and academic sources. But statistics in and of themselves mean nothing–unemployment, for instance, is simply a number that the Bureau of Labor Statistics arrives at by conducting a survey of a subset of the population and then plugging the results into their statistical models. For the number to have any meaning, we have to have a story about the process that generated it–a story based on assumptions about meaningful sample sizes and what exactly unemployment is–not to mention why we should care.
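To make that concrete, consider the headline unemployment rate. Roughly speaking (the BLS’s actual methodology involves survey weighting and seasonal adjustment, so treat this as an illustrative sketch rather than the official formula):

    unemployment rate = (people counted as unemployed / civilian labor force) x 100

Every term in that ratio embeds a definitional choice: who counts as “unemployed” (roughly, people without a job who have actively looked for one recently), and who counts as being in the labor force at all (discouraged workers who have stopped looking are excluded). Each of those choices is itself a small story about what we think joblessness means.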

Cowen has a story that involves, among other things, median income. Brynjolfsson came prepared with a story about why median income did not make the point that Cowen believed it did. Cowen came prepared with a story about why Brynjolfsson’s productivity statistics did not mean what he argued they did. Seen as a struggle between competing narratives, the whole thing is really fascinating.

I find Brynjolfsson’s story much more appealing than Cowen’s, but both are perfectly coherent. Neither their appeal nor their coherence is really any help in figuring out whether either of them is true.

Science is Storytelling

Science is a body of theories, and a theory is just a story about some specific aspect of the world we live in.

I think people tend to bristle when I say this because they think the implication is that science is as subjective as aesthetic taste, but that is not what I am saying. There is such a thing as a true story, or at least stories that are more or less accurate than one another. The fact that something is a story does not automatically relegate it to the same status as Little Red Riding Hood and general make-believe.

What sets science apart from other bodies of stories is the powerful processes it has for filtering the more accurate and testable theories from those that are inferior in either or both regards.

Of course, there are those who argue that science really is no different from fictional storytelling, but I find it hard to take this point of view seriously. Call me crazy, but I don’t think airplanes or the computer I’m typing this on came from nowhere.

Social Media is Storytelling

As people increasingly cluster into shared digital spaces like Facebook, Twitter, Tumblr, or Reddit, we are getting increasingly exposed to other people’s stories. This includes literally the ongoing stories of people’s lives–as they get married, graduate, or simply have breakfast–but also the stories that other people have invested in. From political ideologies to religious beliefs to celebrities’ lives, we are both connecting with others on the basis of which stories we have in common and getting exposed to the stories of people we’ve already connected with.

The idea that the internet would result in “daily me” or “filter bubble” silos where we’re only ever exposed to what we want to be exposed to is the complete opposite of what is actually happening. The reality is that it is getting harder to avoid being exposed to a family member’s politics, or a friend’s interest in subject matter we find boring or offensive, or any subject that we want to avoid, without avoiding the internet entirely.

Just look at what happens when one story blows up and overshadows another that some people care about. People get angry and frustrated because the fact that others do not share their priorities means that the story they care about gets a lot less attention.

Adapting to life after the great digital migration will require more of a tolerance for being exposed to stories we may have no interest in or have a reflexive hostility towards.

Confession of a Story Hoarder

Storytelling is as close as I come to having something like religion. I love people’s stories; the stories of their lives and the stories they tell to make sense of this crazy and confusing world that we live in. Though I do not believe in the divine, a part of me enjoys the idea of one great storyteller who is writing all of this as we go along. I have a terrible memory for just about everything, but I will always remember your story if you choose to share it with me.

Without being pushy about it, I don’t think anyone should be afraid to share their stories. That’s what human life is all about! It’s how we connect with one another, it’s how we find meaning.

That’s all I have for this particular story. Hope you found it worth the time it took to read it.

On Being Public

When I read Clay Shirky’s Here Comes Everybody or Kevin Kelly’s What Technology Wants, I was confronted with some truly big-think ideas that took me a few months to digest and get my head around after I had finished them.

Reading Jeff Jarvis’ Public Parts, by contrast, felt more like continuing a conversation that had been going on before I started the book and would continue well after I finished it. In part this is because I listen to This Week in Google every week, and I watched as Jarvis formulated the idea for the book and gradually became the show’s chief publicness advocate.

But I had also been having discussions on this subject on my own. When Google+ first launched on an invite-only basis, Google was looking for all the feedback it could get. The whole concept behind circles was to give people maximum control over their privacy with the minimum possible effort, in response to concerns expressed about Facebook, not to mention Google itself. The debates that took place in those first couple of weeks made me eager for Jarvis’ book to come out–so many people put so much stock in the importance of privacy without taking into account the value of being public!

Jarvis has been practicing what he preaches for some time, talking openly about his prostate cancer and the unfortunate side effects of the treatment on his blog. One complaint that people have about this is that most of his readers subscribe to his blog to read his commentary on media, and don’t want to hear about his surgery-induced impotence. But Jarvis argues that he, personally, got value from being public, because he received a ton of feedback from people who had experience with what he was going through and were able to give him advice.

Moreover, there is a social benefit to talking about such things in public–now, anyone who has prostate cancer can find Jarvis’ blog posts on the subject–as well as all the helpful comments on them–simply by searching on Google.

Jarvis doesn’t discount privacy, but he does think there is a dearth of publicness advocates relative to the big privacy advocacy industry that has cropped up. Since participating in those discussions on Google+, and reading his book, I have begun to look at things through the lens of the value of being public.

The recent Hyperbole and a Half post, Adventures in Depression, is a perfect example. I have a lot of friends and family who have suffered through depression, but I haven’t lived it myself. The post gave someone like me an insight into what depression is like that I had never had before. At the same time, the people I know who have experience with depression universally seemed glad that someone had so perfectly, and humorously, described what they have gone through.

For those of us who don’t command the kind of audiences that Hyperbole and a Half or Jeff Jarvis have, there is still plenty of reason to be public. If you limit the people you share with, you may never realize who cares about the events in your life and is willing to make an effort to support and encourage you. When my friend Kelly wrote this very brave and very honest post about what she’s been going through, she received some wonderful feedback from an unexpected source, and it made her day.

This doesn’t mean that everything needs to be made public all the time. But it’s important to take the value of being public seriously, and to think hard about how you present yourself in public.

I created this site, under this domain, on a server I pay for, because as I was reading Jarvis’ book I started to rethink the way I was conducting my public life. I have no problem with my various social network profiles and Blogger blogs showing up when someone Googles me, but I wanted one site that I owned from beginning to end, under nothing but my own name. I don’t think everyone needs to do this, but I do think it’s something everyone should consider–and something schools should inform students is an option.

So how could you benefit from being more public?