The Collision of the Personal and the Professional

BLOGS VS MAINSTREAM MEDIA…FIGHT!!

Eight years ago, when I was a pretentious, know-it-all 19-year-old, the conversation about new media was dominated by the rhetoric of bloggers and journalists, citizen and mainstream media. I had seen the blogosphere call out Dan Rather for running with forged documents as evidence. I learned of the role bloggers played in making sure Trent Lott’s statements saw the light of day.

As far as I was concerned, newspapers and news outlets in general were old hat on their way to extinction, and blogs were the future.

What did I think this meant?

It meant that newspapers would unbundle. It meant that articles on the Iraq War or science features written by journalists with little background in the subject matter would be replaced by people living in Iraq, and actual scientists, who would have blogs. This wasn’t all in my head–such blogs existed and have only grown more numerous.

My thoughts on whether anyone would make money on this new way of things, and how, went back and forth. But I thought the future looked more like Instapundit and Sandmonkey than like The New York Times and The Washington Post.

As I have witnessed the evolution of the web over the years, aged to a point beyond a number ending in -teen, and followed the conversation and research on new media, my point of view has changed–to say the least.

It’s not simply that my old view was wrong; it was far too narrow. It has become clear not only that professional media, in some form, is here to stay, but also that the old blog vs mainstream media perspective misses the big picture.

What has happened is that many activities that we conducted in our personal lives have moved online; they have become digital and they have become some approximation of public. This has big implications for other people’s professions–one tiny corner of which is the impact that personal blogs have had on professional media. But it also has an impact on our own professional lives.

In short, the personal and the professional are colliding on a number of fronts. How this collision will play out is an open question.

THE PERSONAL BECOMES PUBLIC

The vast majority of my conversations with nearly all of my friends and family occur in a digital format. It happens on Twitter, Facebook, and Tumblr. It happens in email, in text messages, and in Google Talk chat windows. A very large proportion of this is public or semi-public.

I also enjoy writing about subjects that I’m thinking about. For that reason, I’ve maintained a blog in one form or another since 2004. I have never made one red cent off of my blogging. It has always been something I’ve done out of enjoyment of the writing itself.

Before the Internet, my writing would undoubtedly have been relegated to the handful of friends I could strong-arm into looking at some copies I made for them. I certainly wouldn’t have been able to ask this of them on a very regular basis, so most of my writing would have remained unread–or, discouraged, I would have written a lot less.

The thing I enjoyed about blogging from the beginning was that it provided me with a place to put my writing where people could find it, without me having to make the imposition of bringing it to them. However, translating this private analogue activity into a public and digital one has implications beyond this simple convenience.

For one thing, it makes it possible for me to connect with new people who share my interests from anywhere in the world. It can also have implications for my professional life. If I write something insulting about my coworkers, or, say, something extremely racist, odds are it could get me fired and possibly have an impact on my long-term employability.

Conversely, just as I can discover and be discovered by new friends, I can also discover and be discovered by people who might provide me with a career opportunity–and indeed this happened to me earlier this year.

When enough enthusiasts move online in this manner, it begins to have consequences for the world of professional writing in general. One lone guy blogging about a few esoteric subjects isn’t going to have much of an impact. Over 180 million people writing about everything under the sun will have some serious implications. If we take Sturgeon’s Law at face value and say that you can throw 90 percent of that in the garbage, we’re still talking about tens of millions of people writing pieces of average to excellent quality.

This is a dramatic expansion in the supply of written work, and it has understandably made professional producers of written words sweat more than a little. One way of looking at this is from the old blog vs mainstream media perspective. A better way to look at it is from the understanding that any professional content outlet is going to have to adapt to the new reality of personal production if it wants to survive.

That process of adaptation has been messy and is still ongoing.

THE PROFESSIONAL BEGINS TO ADAPT

What my 19-year-old self did not realize is that the media business has never really sold information. It has sold stories, it has sold something for groups to rally around and identify themselves with or against. There is still money to be made by selling this product. Clay Johnson has documented some methods that he finds vile, but there are plenty of perfectly respectable ways to do it as well.

Take The Verge–a technology site that launched last year. It does not suffer from the baggage of a legacy business–it was born online and lives online. It was created by a group of writers from Engadget, another professional outlet that was born on the web, who thought they could do better on their own. I have argued that their initial success was made possible in part by the fact that the individual writers had built up a community around them, through their podcast and through their personal Twitter accounts.

The Verge invests a lot in building its community. The content management tools it offers in its forums are, they claim, just as powerful as the tools they themselves use to write posts. They frequently highlight forum posts on their main page. Their writers engage with their readers there and on various social media.

Another way that the professional world has adapted is by treating the group of unpaid individuals producing in their space as a sort of gigantic farm system for talent and fame. This system is filled with simple enthusiasts, but also includes a lot of people consciously trying to make the leap to a career in what they’re currently doing for free. Either way, a tiny fraction of this group will become popular to varying extents. Rather than competing with this subset, many existing professional operations will simply snap these individuals up.

Take Nate Silver, the subject of much attention this election cycle. He started writing about politics in a Daily Kos diary, then launched his own blog on his own domain. Eventually, this was snapped up by The New York Times. The article on this is telling:

In a three-year licensing arrangement, the FiveThirtyEight blog will be folded into NYTimes.com. Mr. Silver, regularly called a statistical wizard for his political projections based on dissections of polling data, will retain all rights to the blog and will continue to run it himself.

In recent years, The Times and other newspapers have tapped into the original, sometimes opinionated voices on the Web by hiring bloggers and in some cases licensing their content. In a similar arrangement, The Times folded the blog Freakonomics into the opinion section of the site in 2007.

Forbes did this with Modeled Behavior; The Atlantic, and now The Daily Beast, did it with Andrew Sullivan’s Daily Dish. In publishing, Crown did this with Scott Sigler, and St. Martin’s Press did this with Amanda Hocking.

Suffice it to say, these markets continue to be greatly disrupted. However, I do not think the adapted, matured versions of these markets will involve the utter extinction of professional institutions.

YOU GOT YOUR PROFESSIONAL IN MY PERSONAL

I consider my Twitter account to be extremely personal. No one is paying me to be there. With a handful of exceptions, I don’t have any professional relationships with the people I follow or am followed by there.

But there are definitely people who I feel have followed me because of some notion that it might help their career. Not because I’m some special guy who’s in the know, but because they think, say, that following everyone who seems to talk a lot about social media will somehow vaguely translate into success in a career in that industry. A lot of people who consider Twitter a place for human beings to talk to one another as private individuals have a low opinion of such people.

But I cannot deny that I have, on occasion, used Twitter to my professional advantage. And it’s not as though there’s a line in the sand for any of these services stating FOR PERSONAL USE ONLY. It’s difficult for journalists of any kind to treat anything they say in public as something that can be separated from their profession. I have seen some create distinct, explicitly labeled personal Twitter accounts, with protected tweets. Of course, Jeff Jarvis would point out that they are merely creating another kind of public by doing so.

Moreover, more and more of the services we use in our personal lives are having implications for our employers. How many of us have had an employer ask us to “like” the company page on Facebook? Or share a link to a company press release? These services are far too new for clear expectations to have formed around them. Is this overstepping the boundaries of what is acceptable, or is this a legitimate professional responsibility we have to our employers?

In a world where a personal project or an answer on Stack Overflow can be added to your resume when applying for a job, the line between personal and professional is not quite as sharp as it used to be.

Take Marginal Revolution as an example. Is it a personal or a professional blog? Certainly Tyler Cowen and Alex Tabarrok are not paid to write what they post. But they are using the blog as a venue for participating in the larger conversation of the economics profession. Of course, they also post on any number of specific subjects that catch their interest. It is a platform both to promote their books and to solicit advice from their readers on what restaurants to check out when they are traveling.

Are categories like “personal” or “professional” even useful for describing things like Marginal Revolution? Is it an exceptional case, or–its particular level of popularity set aside–is it the new normal?

How Has the Web Evolved?

Here’s a pocket history of the web, according to many people. In the early days, the web was just pages of information linked to each other. Then along came web crawlers that helped you find what you wanted among all that information. Some time around 2003 or maybe 2004, the social web really kicked into gear, and thereafter the web’s users began to connect with each other more and more often. Hence Web 2.0, Wikipedia, MySpace, Facebook, Twitter, etc. I’m not strawmanning here. This is the dominant history of the web as seen, for example, in this Wikipedia entry on the ‘Social Web.’

But it’s never felt quite right to me.

-Alexis Madrigal, Dark Social: We Have the Whole History of the Web Wrong

Madrigal’s summation is definitely not a strawman. Take the following passage from Paul Adams’ book, Grouped:

The second shift is a major change in the structure of the web. It’s moving away from being built around content, and is being rebuilt around people. This is correlated with a major change in how people spend their time on the web. They’re spending less time interacting with content, and more time communicating with other people.

Is this the case? Was the pre-Facebook, pre-Twitter web really just a bunch of “documents linked together”, as Adams claims elsewhere in the book?

Madrigal doesn’t think so, and neither do I.

The Web Was Always Social

I spent most of the 90s as a teenager in rural Washington and my web was highly, highly social. We had instant messenger and chat rooms and ICQ and USENET forums and email. My whole Internet life involved sharing links with local and Internet friends. How was I supposed to believe that somehow Friendster and Facebook created a social web out of what was previously a lonely journey in cyberspace when I knew that this has not been my experience? True, my web social life used tools that ran parallel to, not on, the web, but it existed nonetheless.

Madrigal’s experience parallels my own. It might be more appropriate to speak of a social net rather than a social web, since the social technology of my youth included AIM and ICQ and IRC. The web itself was definitely social, however–in middle school, I spent a lot of time on fan sites for the video games and TV shows that I was into, and those sites had forums and Java chatrooms. Each site became the common space its community used, and individual connections could then be followed up through email and ICQ and other chat services with personal accounts.

eGroups–an email list service that was eventually swallowed up by Yahoo!–was another source of social activity for me; it became a sort of forum in my inbox.

Later, I got deeply into EZGroups, a service for quickly making your own forum, with a decent discovery mechanism for finding other people’s. Then there was LiveJournal, which came closest to the modern idea of how social graphs work.

In short, there were a ton of services for connecting with other individuals and groups, and interacting with people on those services was what I did with the vast majority of my time online in those days. And I don’t think Madrigal’s experience or mine is unusual in this regard.

The Web Was Always Viral

Before “going viral” was the phrase we used to describe a piece of content suddenly jumping from a few dozen views to a few thousand, or tens or hundreds of thousands, or millions, the phenomenon existed. One example that stands out in my mind from when I was in high school (over ten years ago) is “Irrational Exuberance (Yatta)”.

Yatta was part of a class of videos and animations that everyone suddenly knew about. Hanging out at a friend’s house, one-on-one or in groups, eventually included a stretch of time where we all gathered around a computer screen and watched the videos each of us had found that everyone else just had to see too.

This was before YouTube, of course. The sources were big Flash portals like Newgrounds and Albino Blacksheep. Sometimes they were just some random, small website. But there was never any shortage of stuff to share on our pre-Facebook social networks, or in our old-fashioned in-person ones for that matter!

We Have Increased What We Can Measure

One thing that Madrigal and Adams agree on is that modern social networks have vastly increased our ability to measure what people are doing online. Here’s Madrigal:

Second, the social sites that arrived in the 2000s did not create the social web, but they did structure it. This is really, really significant. In large part, they made sharing on the Internet an act of publishing (!), with all the attendant changes that come with that switch. Publishing social interactions makes them more visible, searchable, and adds a lot of metadata to your simple link or photo post. There are some great things about this, but social networks also give a novel, permanent identity to your online persona. Your taste can be monetized, by you or (much more likely) the service itself.

And Adams, in Grouped:

The third shift is that for the first time, we can accurately map and measure social interaction. Many of our theories can now be quantitatively tested. This is incredibly exciting for researchers, but it will also transform how we think about marketing and advertising. Many things that were previously hard to measure, for example, word of mouth marketing, can now be analyzed and understood. We can now start to measure how people really influence other people, and it will change how we do business.

It is certainly true that Twitter and Facebook have increased the measurability of certain activities. As someone who works in digital advertising and has a strong interest in the social sciences, I find this an exciting shift. The information being generated will inform the decisions these services make, as well as make it more possible to sustain such services financially–so in that regard, measurability will have an impact on user experience.

But measurability isn’t really all that directly exciting to me as a user of these services. Can we really encompass the big change that’s happened since the onset of the modern social web under the heading of what we’re able to quantify?

Common Spaces

As you may have guessed, I don’t think the measurability really covers it. And I don’t think Madrigal’s characterization of the transition as one to publishing really does the trick, either–it is just as much “publishing” to write in a publicly viewable forum as it is to post a status on Facebook.

One change that has happened rather rapidly is the emergence of gigantic, global common spaces. It may be that many of the basic activities people do on Twitter and Facebook–sharing links, having conversations one on one or in groups, sharing pictures–are not new, but what is new is that it is being done in a space shared by a ridiculously large percentage of the connected world.

Twitter activity is public by default, and anything you say on it can pass into a gigantic number of people’s timelines in an instant if it is carried far enough on a wave of retweets. Conversely, your circle of 15 friends you follow on Twitter who all live near you in DC may seem as insular as any old fashioned forum or email list, but the fact of the matter is that tweets from people anywhere in the world can enter your timeline at any point when any of those 15 friends retweet them.

Facebook is ostensibly more private than Twitter, but in practice content can travel just as far and wide–farther and wider, in fact, as its service reaches a billion users, almost half of the connected world.

We are only beginning to understand what it is like to live with these enormous online common spaces. Serendipity, which some people seemed to think was going to be killed by algorithms and automation, is a larger force than ever. This includes the case where you’re having a conversation with one person on Twitter and someone who follows you both jumps in because they find it interesting, or entertaining. It also includes the case where you are discovered and end up with a job.

Somewhat more controversially, it may even include the case where one man’s Facebook group leads to the overthrow of a 30-year-old regime.

Always Connected

I would be derelict if I didn’t mention the obvious impact that mobile devices are having on the evolution of the web. From a pure input perspective, we can now record anything from anywhere and share it immediately. It is even possible to stream live video from a mobile device–so that anything can be covered in near real-time.

From a usage perspective, it’s like being able to carry around your friends in your pocket, all the time. When I was coming up to New York every other week for work, it was a comfort to be able to have conversations with my friends and see what they were saying on Twitter and Facebook. Especially on those nights when I didn’t have anyone to meet up with for dinner, and had to strike out on my own.

Much has been said about how mobile is revolutionizing our lives, and I don’t have too much to add to that here. But we can’t talk about the web without thinking about how its evolution is tied inextricably to the increasing mobility of our connected devices.

The Maturing Ecosystem

One big thing that has happened since I started using the web is that the ecosystem of social, content, and commercial services has matured significantly. The services that were there before our big common spaces have adapted to the existence of those spaces. As an example, I can’t help but see Instapundit as a blog that is frozen in a very particular time in the web’s evolution–today, if the vast majority of your updates are a few words and a link, it makes much more sense to publish them on Twitter than to have a full-fledged blog.

Of course, Instapundit became popular long before Twitter existed, and Glenn Reynolds has no reason to change since his audience has stuck with him. But I have to think that if he had started it 10 years later, it would have been on Twitter.

Meanwhile, more mainstream publications have migrated completely online, and more publications that were born on the web have become mainstream. The conversation about whether amateur bloggers are going to replace professional publications has basically died out, as an ecosystem which includes both has become quite robust. Twitter and Facebook act as glue that brings content and people together on a scale unprecedented in the web’s history.

In tech circles we love to talk about what is dead, and what new thing is replacing something that has already grown old in the short timescale of modern technology. But the fact of the matter is that blogs did not kill professional content, and Twitter did not kill blogs. As our connected services evolve, specific companies may fall but it’s unlikely that any particular category of thing is going to truly die. What changes is their role, as the ecosystem absorbs the new tools, new conventions emerge for the old tools, and people simply get a better idea of just what any of this is good for.

The Web Has Evolved

The web, and the Internet more broadly, have undeniably changed in my lifetime. The story of that change, however, is often far more subtle than simply “it has become social” or “viral” or even “mobile”. People are still talking to one another and sharing things online; that has not changed. What has changed is how we’re going about it.

How to Avoid Gas Lines, Now and Forever

Let’s imagine for a moment what we want to happen when gas stations are all of a sudden faced with a shortage of gas.

We would want to encourage consumers to consume less gas. There are several ways they could do this. For trips they absolutely need to take, they could carpool much more often than they used to. There might be a whole set of trips that they decide they shouldn’t take right now, during this time of increased scarcity, so as not to reduce the overall supply further.

We would also want to encourage suppliers to divert from their usual routine to bring more gas to the area with the shortage.

So how do we get to a world where this is what happens during a shortage? Do we have to make laws about how gas is allocated nationally? About how many miles people are allowed to drive, or what the minimum number of passengers per car needs to be? Or more directly, how much gas per person we’re allowed to consume?

The Ideal Policy

In fact, there is a much more elegant solution, totally uncontroversial among economists and proven by the American experience of the 1970s: just allow prices to rise. As soon as the price controls begun by Nixon were overturned, gas lines in America became something mostly confined to history books.

I say mostly because every so often, after a disaster like Sandy, we hear about gas lines cropping up temporarily again. But surely this is inevitable, right?

Wrong. What has happened consistently in these scenarios is that prices have not been allowed to rise.

You might ask how economists can be so cold and unfeeling as to say that the victims of a disaster should have to pay higher prices. Well, let’s do a little thought experiment.

What would happen if prices in New Jersey shot up to an astounding $20 per gallon?

The person who was thinking of taking a 5-minute drive instead of a 30-minute walk might opt to walk instead, since filling up will be so expensive. The group of friends all going to the same place a 30-minute car ride away or farther might pool their money to pay for the gas. In other words, people will economize on their gas usage.

Meanwhile, since they are paying the cost in money rather than in time spent in gas lines, gas stations will be gaining more funds, which in turn will allow them to outbid gas stations outside of Jersey for additional supply. The influx of supply will eventually–and history has demonstrated that this can happen surprisingly quickly–start bringing prices back down.

In short, during a shortage the price system both forces people to reduce their consumption and bids additional supply towards the area that needs it the most. In other words, it accomplishes exactly what you would want to accomplish during a shortage.
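
If it helps to see that mechanism laid out, here is a minimal toy simulation of it. Every curve and number in it (the demand and supply functions, the $4.00 cap) is an assumption I invented for illustration, not real data.

```python
# Toy model of a local gas shortage: compare a price cap to a freely rising price.
# Every curve and number here is an invented assumption, not real data.

def demand(price):
    """Gallons per day local drivers want to buy at a given price (made-up curve)."""
    return max(0.0, 100_000 - 8_000 * price)

def supply(price):
    """Gallons per day available: storm-damaged local supply, plus tankers
    diverted in from outside the region as the local price rises (made-up curve)."""
    diverted = max(0.0, 6_000 * (price - 3.50))
    return 20_000 + diverted

def simulate(price_cap=None, steps=200):
    price = 3.50                          # pre-storm price per gallon
    for _ in range(steps):
        gap = demand(price) - supply(price)
        price += 0.00001 * gap            # price creeps up while demand exceeds supply
        if price_cap is not None:
            price = min(price, price_cap)
    return price, demand(price) - supply(price)

if __name__ == "__main__":
    for label, cap in (("Capped at $4.00", 4.00), ("Free price", None)):
        price, gap = simulate(price_cap=cap)
        print(f"{label:<16} price ${price:.2f}, unmet demand {gap:,.0f} gallons/day")
```

In this toy model the cap leaves a permanent queue of about 45,000 unmet gallons a day, while the free price settles around $7.21 and the gap disappears. The numbers are meaningless; the mechanism is the one described above.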

Every alternative to the simple solution of relying on the price system has proven itself pathetically inept. In the 1970s, regulators tried a whole gamut of allocation schemes, and nothing worked until the price controls were ultimately revoked entirely.

It is frustrating that we still have not learned this lesson. But I suppose history has also demonstrated that we are terrible at learning from repeated failure.

 


Education and Culture

I have a story, which you may find plausible, about the nature of education.

Without touching on the loaded subject of education’s purpose, I think we can meaningfully talk about what its function has been, in practice.

Historically, the function of education has been to initiate young people from affluent families into a high-status culture. It has not been used to provide practical skills that would be put to use in the workplace. Leo Stein, one of Gertrude Stein’s brothers, attended Harvard and then Johns Hopkins for college, yet he was rich enough that he never needed to work to support himself. He had no need or desire to accumulate human capital, nor to send any signal to the labor market.

Education is an extension of the universal human desire to be part of a group–especially if being part of that group makes you feel superior to those who are not.

Whether or not that is entirely still the case is a more complicated question. Since at least the Progressive Era, education has been viewed as an instrument for practical skill building, and as something that should be universal. Rather than rebuild education to suit that purpose, however, we have taken traditional education and tried to force it into a new role, which may be one explanation for why it has been so bad at filling that role.

And we still look down on vocational schools, which are much more specifically tailored to skill building. That alone should tell you something about the true purpose of education even at this late date in its history.

I described previously how the economics department at George Mason University served as a hotbed for spreading a certain culture and ideas, and how most university departments played a similar role. Charles Nauert, Jr. has argued that the emergence of the studia humanitatis curriculum in Europe played an enormous role in the cultural event that we have come to call the Renaissance. Education and culture have been inextricably linked for a very long time.

It seems possible to me that economists have entirely missed the source of the economic impact of education. Maybe it isn’t about getting skills or signaling that you’re a certain caliber of worker. Maybe it has sped up the diffusion of innovations by making more people more like one another in certain dimensions. Or maybe it’s about reducing transaction costs by giving people a common set of reference points, or building trust within the group of educated individuals.

Whatever it is, I’m coming to suspect that the economic impact of education is mostly indirect; and that the function it serves remains, as it was historically, a cultural one.

Our Lumpy Future

The total value of the companies we’ve funded is around 10 billion, give or take a few. But just two companies, Dropbox and Airbnb, account for about three quarters of it.

In startups, the big winners are big to a degree that violates our expectations about variation. I don’t know whether these expectations are innate or learned, but whatever the cause, we are just not prepared for the 1000x variation in outcomes that one finds in startup investing.

-Paul Graham, Black Swan Farming

The freelance writer has to hustle every day for gigs, and some months are better than others. The staff editor is always well fed; the freelance writer is hungry on some days. Then the day comes when print finally dies, the magazine industry collapses, and the staff editor gets laid off. Having built up no resilience, he will starve. He’s less equipped to bounce to the next thing, whereas the freelance writer has been bouncing around her whole life— she’ll be fine. So which type of career is riskier in the long run, in the age of the unthinkable?

-Reid Hoffman and Ben Casnocha, The Start-up of You

The Industrial Revolution was characterized by the rise of well-defined, specialized, routinized jobs. Adam Smith made his observations about pin factory workers more than a hundred years before Henry Ford’s assembly line became an icon of modernity and efficient industry.

With routine work came routine jobs, and routine paychecks. Modern industrial era employment, while taken for granted today, is something of a historical novelty. Before this Bourgeois Era we live in, an overwhelming supermajority of humanity lived on farms, and the rest were aristocracy or warlords of one stripe or another.

Farm life was lumpy–every year had the high point of the harvest, sometimes even with a subsequent festival in the nearby town. Then every year had its long, hard winters. Then there were particularly lumpy years; a bad harvest could wipe out a whole village while a very good one would be the subject of conversation for years afterwards, and might result in a temporary growth in the population.

Lumpiness in Modern Life

This is not to say that modernity has been all smooth trend lines and uninterrupted flow. Nassim Taleb would certainly protest such a claim. Even if we are speaking in strictly economic terms, there have been big, dramatic events of the negative and positive sort. The Great Depression comes to mind. The hyperinflation of Weimar Germany. On the flipside, the German and Japanese post-war Miracles. The sudden gentrification of American cities that had been in decline for decades.

And on a company-by-company and individual-by-individual basis, there has been a lot of lumpiness. Google went from a Stanford computer science project to a multibillion-dollar company within a handful of years. Apple rose and fell and then rose far more spectacularly than ever.

Taleb has argued that the more informational economic activity is, the lumpier it will be. Thus, the content industries, and finance, have always been lumpy. The scalability of informational goods makes it possible for a book, such as Harry Potter, to be a best seller across the entire planet, raking in enormous amounts of money. Meanwhile, hundreds of thousands of the books that come out every year won’t sell more than a handful of copies, for we are a groupish species and we like to focus on a small subset of things that can create a common experience.

This latter piece is preferential attachment: if one person’s consumption of an informational good increases the odds that someone else will consume it “by even a fractional amount”, it will create extremely skewed distributions. And there are well-understood reasons why the book business has always been skewed, and why globalization and digitization will only skew it further.
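
That dynamic is easy to see in a toy simulation. The sketch below is a bare-bones preferential attachment model, with every parameter invented for illustration: each simulated reader usually picks a book in proportion to how many copies it has already sold, and occasionally picks one at random.

```python
import random

# Toy preferential-attachment model of book sales. Every parameter here
# (number of titles, number of readers, the "follow the crowd" weight)
# is invented purely to show how skewed the outcome becomes.

def simulate_sales(n_titles=500, n_readers=50_000, follow_the_crowd=0.9, seed=42):
    rng = random.Random(seed)
    sales = [1] * n_titles              # seed each title with one sale so all start equal
    titles = list(range(n_titles))
    for _ in range(n_readers):
        if rng.random() < follow_the_crowd:
            # Buy a title with probability proportional to its current sales.
            choice = rng.choices(titles, weights=sales, k=1)[0]
        else:
            # Occasionally discover a title completely at random.
            choice = rng.randrange(n_titles)
        sales[choice] += 1
    return sorted(sales, reverse=True)

if __name__ == "__main__":
    sales = simulate_sales()
    total = sum(sales)
    print(f"Top 5 of 500 titles capture {sum(sales[:5]) / total:.0%} of all sales")
    print("Best sellers:", sales[:5])
    print("Median title:", sales[len(sales) // 2])
```

The precise split varies with the made-up parameters and the random seed, but the shape is consistent: a few runaway winners and a long tail of titles that barely move past their starting point.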

Also skewed, though not quite so dramatically, is income over a human lifespan, which is concentrated in your “peak earning years”. Then there is the well-documented phenomenon of extremely skewed healthcare spending, dramatically backloaded into the last few years, and the last few months, of life.

So we are no strangers to lumpiness. But it seems to me that we are blind to it. As Paul Graham notes, it “violates our expectations”. We expect life to be more like the smooth streams of compensation that the industrial revolution has provided us.

We are going to have to adjust, though, because there is good reason to think that those smooth streams are going away for good. Things are about to get a lot lumpier.

The Robots Are Coming

The paradox is this. A job seeker is looking for a well-defined job. But the trend seems to be that if a job can be defined, it can be automated or outsourced.

-Arnold Kling

Our capacity to automate seems, at times, to be limitless. One thing is for certain, however: if a task is repetitive and has clearly defined parameters, we can automate it. The Kling quote above actually understates the situation by bringing outsourcing into it. Even in China, where labor is much, much cheaper and far more plentiful than in any developed nation, manufacturing is moving towards automation. Does this sound familiar?

China’s manufacturing output was over 70% greater in 2008 than it was in 1996. Over the same period, manufacturing employment in the country declined by more than 25%.

This is the exact same trend that we have been seeing in the United States for half a century, only, as with everything else, China is playing catch-up, and so the trend has accelerated there. While politicians and pundits in America blame outsourcing for the loss of manufacturing jobs, the fact of the matter is that our manufacturing output never stopped growing; it was only manufacturing employment that declined.

This trend, explored at length in Race Against the Machine, is not without historical precedent. Remember, we were an agricultural nation before we were an industrial one.

A century ago, 40 percent of Americans worked on farms. Today, the farm sector employs about 3 percent of our workforce. But our agriculture economy still outproduces all but two countries.

Some believe that the pattern will play out in a similar way all over again–manufacturing and anything else that can be automated will shrink down to single-digit percentages of our employment. But entrepreneurs will think up new ways to put people to work en masse.

A more pessimistic story, believed by Robin Hanson for example, is that there’s no going back. Automation has grown so good that the majority of people simply will never gain the skillset to be able to provide comparable value, in any sector. Anyone who has a successful company will be able to use automation to produce on an unimaginable scale and thus become unimaginably rich even by today’s standards, but a large segment of the population will not be able to find any way to contribute value whatsoever.

I am proposing a different story: we will all learn to live with ultra-lumpy incomes.

A World of Black Swan Farmers

Join me for a minute in our automated future. It only takes a few tens of thousands of people to produce agricultural and manufacturing output per capita on a scale we would consider absurdly large today. Delivery and postal workers have been put out of work by tacocopters. Maids, fast food workers and cooks have all been replaced with robots. What are we to do?

Well, the first upside is that everything is extremely cheap. We can produce so much food, and so much stuff, and provide so many services, that our huge supply will drive prices straight down. So you don’t need a lot of money to maintain a standard of living that would be considered affluent by historic standards.

OK, but where does even that little bit of money come from?

We will all have to adjust to the lack of routinized and easily definable jobs by becoming a little like venture capitalists. We will put out blog posts, and Kindle books, and apps, and any other sort of informational good that we can, in the hopes that one blockbuster will support us for a while.

Since these black swans are, by necessity, very rare on a case by case basis, we will probably combine our efforts and share the spoils. The most obvious way would be for the member of the family who manages to get a hit to take care of the rest until the next hit comes along. But perhaps we will explore many more kinds of partnerships and legally binding revenue-sharing arrangements in order to cope with this radically different labor market.

And again, because of expanding supply and falling prices, you need not have a big hit in order to support yourself. Maybe 100,000 blog views will pay enough, through AdSense revenue, to feed you for a few months.
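
For what it’s worth, the back-of-the-envelope arithmetic behind that guess is simple to write down. Both figures below are assumptions pulled out of the air, not real AdSense rates or grocery budgets.

```python
# Back-of-the-envelope check on the "100,000 views" guess.
# Both figures below are pure assumptions, not real AdSense rates or budgets.
views = 100_000
assumed_rpm = 5.00            # assumed revenue per 1,000 views, in dollars
monthly_food_budget = 250.00  # assumed frugal grocery spend, in dollars

earnings = views / 1_000 * assumed_rpm
print(f"${earnings:,.2f} earned, roughly {earnings / monthly_food_budget:.1f} months of groceries")
```

At those assumed rates it comes out to about two months of groceries per hundred thousand views, which is at least in the neighborhood of the guess.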

I can imagine a world where people have only periodic income and they have a higher standard of living than we currently do. I can imagine things would seem psychologically more tenuous in such a world, but it’s not as though anything was ever guaranteed under the old way. And maybe we will adapt, psychologically.

Do you think that you could live happily in a high volatility gig economy?