In Praise of Blogosphere

In 2004 I jumped into the world of blogging in a big way, both in the sheer amount that I read on a daily basis and my personal output in a widely-unread blog with a name only a pretentious 19-year-old could come up with. At that time, Very Serious Person that I was, I hated the term “blogosphere”. At a time when I was angrily arguing that the Mainstream Media was overrated and bloggers were the future, “blogosphere” seemed awkward and embarrassing. I tried to avoid using it, instead resorting to things like “blog ecosystem”. In the end, I relented, because it was clear that blogosphere was here to stay, and it began to feel even more awkward to be the only one not saying it.

Nine years is a long time in the cycle of media storytelling, to say nothing of technology and technological adoption. Nowadays you’ll still get the occasional scare piece to the tune of “Jesus Christ, the Internet is nothing but one big, angry mob of wide-eyed vigilantes!” but these are at least as likely to cover people’s activities on Twitter and similar social media as on blogs. For the most part, the role of the blog has been cemented and has matured, within a larger (dare I say it?) ecosystem of social interactions and media platforms.

There is greater appreciation for the fact that a blog is nothing but one part of the greatly lowered barriers to entry into producing public content, and that non-professionals can and do contribute a great deal to the public conversation every day. Some of them have aspirations of becoming professional contributors to this conversation, but many do not.

As perceptions and usage of the blog have matured, there has been an increasing allergic reaction to some of the rhetoric of the early adopters. More than once I have seen friends I follow on Twitter complain about the term blogosphere and wish that its usage would cease.

I want to defend the much-maligned blogosphere, and not just on the (very valuable) rule of thumb that if 19-year-old Adam Gurri believed it, there was probably something crucially wrong about it. Blogosphere was a term coined and adopted by people who were sick of the modes of conversation modern media inherited from our mass media past. Bloggers who wrote about new media in the first half of the last decade were sick of bad fact-checking and baked-in moral assumptions hidden beneath a veil of fake objectivity. Most of all, they were sick of people taking themselves too damned seriously.

That is why blogs covering rather serious topics nevertheless took on silly or offensive names such as Instapundit or Sandmonkey. It’s why many posts that carried ever-increasing weight in the public discussion used an inordinate amount of profanity to make their points.

The equilibrium has shifted since then; there are now more professional outlets that have adapted their rhetoric to be less stilted and less self-consciously objective, if still intended to be respectable. And the blogs that carry weight have, in my subjective perception, toned down the juvenile naming conventions and the swearing, to a certain extent.

Nevertheless, I like blogosphere because it has that overtly geeky, tongue-in-cheek side to it that I think is unlikely to become irrelevant in my lifetime. We could all stand to take ourselves a little less seriously.

Rereading The Long Tail

This was officially launch week for The Umlaut, a new online magazine that my friends Jerry Brito and Eli Dourado have started. There are five of us who will be regular writers for it. For my first piece, I thought it might be fun to go back and re-examine The Long Tail almost seven years after it was published.

The Long Tail had a big impact on the conversation around new media at the time, and it was personally significant for me. The original article was published in October of 2004, a mere month before I began blogging. Trends in new media were a fascination for me from the beginning, so I kept up with Chris Anderson’s now-defunct Long Tail blog religiously.

Nineteen years old and a tad overenthusiastic, I strongly believed that the mainstream media was going the way of the dinosaur and would be replaced by some distributed ecosystem of mostly amateur bloggers. In short, I thought the long tail was going to overthrow the head of the tail, and that would be that. Moreover, I thought that all content would eventually be offered entirely free of charge.

That was a long time ago now, and my views have evolved in some respects, and completely changed in others. I think that the head of the tail is going to become larger, not smaller, and professionals are here to stay–as I elaborate on here. However, I do think that the growth of the long tail will be very culturally significant.

When I began rereading The Long Tail, I expected to find a clear argument from Anderson that he thought the head of the tail would get smaller relative to the long tail. Instead, he was frustratingly vague on this point. Consider the following quote:

What’s truly amazing about the Long Tail is the sheer size of it. Again, if you combine enough of the non-hits, you’ve actually established a market that rivals the hits. Take books: The average Barnes & Noble superstore carries around 100,000 titles. Yet more than a quarter of Amazon’s book sales come from outside its top 100,000 titles. Consider the implication: If the Amazon statistics are any guide, the market for books that are not even sold in the average bookstore is already a third the size of the existing market—and what’s more, it’s growing quickly. If these growth trends continue, the potential book market may actually be half again as big as it appears to be, if only we can get over the economics of scarcity.

Let us unpack this quote a little.

First, Anderson offers the fact that more than 25% of Amazon’s book sales occur outside of its top 100,000 titles as evidence of the revenue potential of the long tail. But this is conceptually flawed. At the time of the book’s publication, Amazon carried some 5 million titles. If nearly all of the additional revenue beyond the top 100,000 titles was captured by the following 100,000 titles, then 4% of Amazon’s titles account for nearly all of its book revenues. And there is good reason to believe that that is exactly how the distribution played out, both then and now.

The fact that 200,000 is a larger number than 100,000 is indeed significant; it shows the gains a company can make from increasing its scale, if it is able to bring costs down enough to do so. But to claim that this is evidence of the commercial potential of the long tail is flat-out wrong. We’re still talking about a highly skewed power law distribution–in fact, an even more skewed power law distribution: we used to speak of 20% of books accounting for 80% of the revenue, and here we are talking about 4% of the books accounting for something on the order of 99% of the revenue.
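The skew implied here can be sanity-checked with a toy model. The sketch below is my own construction, not Anderson’s: it assumes per-title revenue follows a simple Zipf-style power law, with revenue proportional to 1/rank, across a 5-million-title catalog. The exact concentration depends on the exponent one assumes, but the point is how head-heavy even a mild power law remains:

```python
# Toy check of the Amazon arithmetic above. Assumption (mine, not
# Anderson's): per-title revenue follows a Zipf-style power law,
# revenue(rank) proportional to 1 / rank.
N_TITLES = 5_000_000  # rough catalog size discussed above

total = 0.0
top_100k = 0.0
top_200k = 0.0
for rank in range(1, N_TITLES + 1):
    total += 1.0 / rank
    if rank == 100_000:
        top_100k = total  # revenue weight of the top 100,000 titles
    if rank == 200_000:
        top_200k = total  # revenue weight of the top 4% of titles

print(f"revenue outside the top 100,000 titles: {1 - top_100k / total:.0%}")
print(f"revenue from the top 4% of titles:      {top_200k / total:.0%}")
```

With an exponent of 1, this lands close to Anderson’s “more than a quarter outside the top 100,000” figure, while the top 4 percent of titles still capture the great majority of revenue; steeper exponents concentrate revenue into the head even further.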

This argument appears several times throughout the book, in several forms. At one point Anderson talks about how the scaling up of choices makes the top 100 inherently less significant. Which is true, but it does not make the head of the tail any less significant; it just means that there is a greater number of works within that head.

Second, this bit about “if only we can get over the economics of scarcity.” Anderson argues, repeatedly, that mass markets and big blockbusters are an artifact of a society built on scarcity, and the long tail is a creation of the new economics of abundance. This is wrong to its core.

As I argue in my first piece at The Umlaut, we have been expanding the long tail while increasing the head of the tail since the very beginning of the Industrial Revolution. Scale in the upward direction fuels scale in the outward direction. Consider Kevin Kelly’s theory of 1,000 true fans, the paradigmatic long tail success story.

Assume conservatively that your True Fans will each spend one day’s wages per year in support of what you do. That “one-day-wage” is an average, because of course your truest fans will spend a lot more than that.  Let’s peg that per diem each True Fan spends at $100 per year. If you have 1,000 fans that sums up to $100,000 per year, which minus some modest expenses, is a living for most folks.

Now ask yourself: how do we get to a world where someone can make a living by having 1,000 true fans, or fewer? Or 1,000 more modest fans, or fewer?

One way we get to that world is through falling costs. If we assume a fixed amount that some group of fans is willing to pay for your stuff, then progress is achieved by lowering the cost of producing your stuff.

Another way is for everyone to get wealthier, and thus be able to be more effective patrons of niche creators. If I make twice as much this year as I did last year, then I can afford to spend a lot more above and beyond my costs of living.

Another conceivable way is sort of a combination of the first two–falling costs for the patrons. If I make as much in nominal terms as I did last year, but my costs of living fall by half, then it is effectively the same as though I had doubled my income.
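These three routes are arithmetically interchangeable: each widens the gap between what fans spend and what it costs the creator to produce and to live. A toy calculation, using Kelly’s hypothetical $100-per-fan figure as the baseline (every other number here is illustrative, not empirical):

```python
# Toy model of the three routes to niche-creator viability sketched
# above. All numbers are illustrative, not empirical.

def fans_needed(spend_per_fan, production_cost, living_cost):
    """Fans required to cover the creator's production and living costs."""
    return (production_cost + living_cost) / spend_per_fan

baseline = fans_needed(spend_per_fan=100, production_cost=20_000, living_cost=40_000)

cheaper_tools = fans_needed(100, 10_000, 40_000)   # route 1: production costs halve
richer_fans = fans_needed(200, 20_000, 40_000)     # route 2: fans spend twice as much
cheaper_living = fans_needed(100, 20_000, 20_000)  # route 3: cost of living halves

print(baseline, cheaper_tools, richer_fans, cheaper_living)
# -> 600.0 500.0 300.0 400.0: every route shrinks the audience a creator needs
```

Stack the three together, as the Industrial Revolution has, and the threshold falls fast: halve production costs, halve living costs, and double fan spending, and the same hypothetical creator needs only 150 fans.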

Put all three of these trends together and you have perfectly described the state of material progress since the onset of the Industrial Revolution. Huge breakthroughs in our productive capacities have translated into a greater ability to patronize niche phenomena.

Obviously the personal computer and the Internet have taken this trend and increased its scale by several orders of magnitude–especially in any specific area that can be digitized. But that doesn’t mean we’ve entered a new era of abundance. The economics are the same as they have always been. The frontier has just been pushed way, way further out.

Moreover, the blockbuster is not an artifact of scarcity. Quite the opposite. The wealthier and more interconnected we are, the taller the “short tail” can be. In my article, I mention the example of Harry Potter, which was a global hit on an unprecedented scale (this Atlantic piece estimates the franchise as a whole has generated something like $21 billion). Hits on that scale are rare, giving us the illusion at any given moment that they are a passing thing, a relic of a bygone era of mass markets. But the next Harry Potter will be much, much bigger than Harry Potter was, because the size of the global market has only grown and become more connected.

Consider Clay Shirky’s observation that skew is created when one person’s behavior increases the probability that someone else will engage in that behavior “by even a fractional amount”. His example involves the probability that a given blog will get a new reader, but it extends to just about every area of human life. And the effect he describes, but does not name, is the network effect–one additional user of Facebook increases the probability that they will gain yet another one, one additional purchaser of a Harry Potter book increases the probability that yet another person will purchase it.
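Shirky’s mechanism can be simulated directly. The sketch below is my own toy construction, not Shirky’s: each new reader picks a blog with probability proportional to its current readership plus one, a standard preferential-attachment setup in which every existing reader fractionally raises the odds of attracting the next:

```python
import random

# Toy preferential-attachment simulation of the mechanism described
# above: each new reader picks a blog with probability proportional
# to (current readership + 1). Illustrative only.
random.seed(0)
N_BLOGS = 1_000
N_READERS = 100_000

readers = [0] * N_BLOGS
# Urn trick: the pool holds one token per blog (the "+1") plus one
# token per existing reader, so a uniform draw from the pool
# implements the weighted choice in O(1).
pool = list(range(N_BLOGS))

for _ in range(N_READERS):
    blog = random.choice(pool)
    readers[blog] += 1
    pool.append(blog)

readers.sort(reverse=True)
top_share = sum(readers[: N_BLOGS // 100]) / N_READERS
print(f"top 1% of blogs hold {top_share:.0%} of all readers")
```

Even with identical blogs and no quality differences, the head ends up holding several times its uniform share of readers; the skew emerges from the attachment rule alone.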

And we know, from the diffusion of innovations literature, that there comes a certain point at which one additional person increases the probability by a lot more than a fractional amount. As Everett Rogers put it:

The part of the diffusion curve from about 10 percent adoption to 20 percent adoption is the heart of the diffusion process. After that point, it is often impossible to stop the further diffusion of a new idea, even if one wished to do so.

Now, if network effects are what create skew in the first place, and we are living in the most networked age in history, how plausible does Anderson’s argument seem that the head of the tail will be of decreasing significance because of new networks?

What Does He Really Think?

Part of what’s frustrating about the book is that Anderson never makes a solid claim about how big he thinks the head of the tail will be relative to the tail. He offers some facts that are irrelevant to answering this question, such as the Amazon statistic described above. In some places he seems to be saying the head will be smaller:

The theory of the Long Tail can be boiled down to this: Our culture and economy are increasingly shifting away from a focus on a relatively small number of hits (mainstream products and markets) at the head of the demand curve, and moving toward a huge number of niches in the tail. In an era without the constraints of physical shelf space and other bottlenecks of distribution, narrowly targeted goods and services can be as economically attractive as mainstream fare.

The long tail is going to be “as economically attractive” as the head of the tail. That’s what he’s saying, right? If so, then he is wrong, for the reasons described above.

But maybe that isn’t what he’s saying. Consider:

This is why I’ve described the Long Tail as the death of the 80/20 Rule, even though it’s actually nothing of the sort. The real 80/20 Rule is just the acknowledgment that a Pareto distribution is at work, and some things will sell a lot better than others, which is as true in Long Tail markets as it is in traditional markets. What the Long Tail offers, however, is the encouragement to not be dominated by the Rule. Even if 20 percent of the products account for 80 percent of the revenue, that’s no reason not to carry the other 80 percent of the products. In Long Tail markets, where the carrying costs of inventory are low, the incentive is there to carry everything, regardless of the volume of its sales. Who knows—with good search and recommendations, a bottom 80 percent product could turn into a top 20 percent product.

Here he seems to be saying that the 80/20 Rule will always remain true, but that this shouldn’t stop us from realizing how important the long tail is in our lives, and how much more important it will be in the future as we get ever more diversity of choices in the relatively niche. Moreover, companies should continue to extend their long tail offerings because, at any moment, one of them might suddenly jump to the head of the tail. So a Kindle book that sells only a handful of copies per year may suddenly go viral and make Amazon a ton of money.

If that’s what he believes, then he is correct. But the mixture of the bad accounting of the sort in the top 100,000 books example above, statements such as the one quoted above about what “the theory of the Long Tail can be boiled down to”, and this last quote about the 80/20 rule forces me to conclude that Anderson’s thinking is simply muddled on this particular point.

Credit Where Credit is Due

Finally, if there’s one thing that I think we can all agree with Anderson on, it is that the expansion of the long tail has greatly increased the quality of our lives. Whether it’s someone like Scott Sigler, who has managed to make a living from his fans, or the passionate community of a small subreddit, there is an ever-expanding virtual ocean of choices in the long tail today.

Chris Anderson argued that the fact that something is not a hit of the blockbuster variety does not mean it is a miss. There are some things that are much more valuable to a small group of people than they are to everyone else, thereby precluding their ability to become a blockbuster. There are also some things that might be equally appealing to the same number of people as a blockbuster, but they simply were not lucky enough to be among the few that won that particular lottery.

All of us live in both the head of the tail and the long tail, and I’m glad that Anderson convinced so many of the value of the latter.

The Collision of the Personal and the Professional

BLOGS VS MAINSTREAM MEDIA…FIGHT!!

Eight years ago, when I was a pretentious, know-it-all 19-year-old, the conversation about new media was dominated by the rhetoric of bloggers and journalists, citizen and mainstream media. I had seen the blogosphere call out Dan Rather for running with forged documents as evidence. I learned of the role they played in making sure Trent Lott’s statements saw the light of day.

As far as I was concerned, newspapers and news outlets in general were old hat on their way to extinction, and blogs were the future.

What did I think this meant?

It meant that newspapers would unbundle. It meant that articles on the Iraq War or science features written by journalists with little background in the subject matter would be replaced by people living in Iraq, and actual scientists, who would have blogs. This wasn’t all in my head–such blogs existed and have only grown more numerous.

My thoughts on whether anyone would make money on this new way of things, and how, went back and forth. But I thought the future looked more like Instapundit and Sandmonkey than like The New York Times and The Washington Post.

As I have witnessed the evolution of the web over the years, aged to a point beyond a number ending in -teen, and followed the conversation and research on new media, my point of view has changed–to say the least.

It’s not simply that my old view was wrong; it was far too narrow. It has become clear not only that professional media, in some form, is here to stay, but also that the old blogs vs. mainstream media perspective misses the big picture.

What has happened is that many activities that we conducted in our personal lives have moved online; they have become digital and they have become some approximation of public. This has big implications for other people’s professions–one tiny corner of which is the impact that personal blogs have had on professional media. But it also has an impact on our own professional lives.

In short, the personal and the professional are colliding on a number of fronts. How this collision will play out is an open question.

THE PERSONAL BECOMES PUBLIC

The vast majority of my conversations with nearly all of my friends and family occur in a digital format. It happens on Twitter, Facebook, and Tumblr. It happens in email, in text messages, and in Google Talk chat windows. A very large proportion of this is public or semi-public.

I also enjoy writing about subjects that I’m thinking about. For that reason, I’ve maintained a blog in one form or another since 2004. I have never made one red cent off of my blogging. It has always been something I’ve done out of enjoyment of the writing itself.

Before the Internet, my writing would undoubtedly have been relegated to the handful of friends I could strong-arm into looking at some copies I made for them. I certainly couldn’t have asked this of them on a very regular basis, so most of my writing would have remained unread–or, discouraged, I would have written a lot less.

The thing I enjoyed about blogging from the beginning was that it provided me with a place to put my writing where people could find it, without me having to make the imposition of bringing it to them. However, translating this private analogue activity into a public and digital one has implications beyond this simple convenience.

For one thing, it makes it possible for me to connect with new people who share my interests from anywhere in the world. It can also have implications for my professional life. If I write something insulting about my coworkers, or, say, something extremely racist, odds are it could get me fired and possibly have an impact on my long-term employability.

Conversely, just as I can discover and be discovered by new friends, I can also discover and be discovered by people who might provide me with a career opportunity–and indeed this happened to me earlier this year.

When enough enthusiasts move online in this manner, it begins to have consequences for the world of professional writing in general. One lone guy blogging about a few esoteric subjects isn’t going to have much of an impact. Over 180 million people writing about everything under the sun will have some serious implications. Even if we take Sturgeon’s Law at face value and throw 90 percent of that in the garbage, we’re still talking about tens of millions of people writing pieces of average to excellent quality.

This is a dramatic expansion in the supply of written works. This has understandably made professional producers of written words sweat more than a little. One way of looking at this is from the old blog vs mainstream media perspective. A better way to look at it is from the understanding that any professional content outlet is going to have to adapt to the new reality of personal production if they want to survive.

That process of adaptation has been messy and is still ongoing.

THE PROFESSIONAL BEGINS TO ADAPT

What my 19-year-old self did not realize is that the media business has never really sold information. It has sold stories; it has sold something for groups to rally around and identify themselves with or against. There is still money to be made by selling this product. Clay Johnson has documented some methods that he finds vile, but there are plenty of perfectly respectable ways to do it as well.

Take The Verge–a technology site that launched last year. It does not suffer from the baggage of a legacy business–it was born online and lives online. It was created by a group of writers from Engadget, another professional outlet born on the web, who thought they could do better on their own. I have argued that their initial success was made possible in part by the fact that the individual writers had built up a community around them, through their podcast and through their personal Twitter accounts.

The Verge invests a lot in building its community. The content management tools it offers in its forums are, the site claims, just as powerful as the tools its own writers use to compose posts. It frequently highlights forum posts on its main page, and its writers engage with readers there and on various social media.

Another way that the professional world has adapted is by treating the group of unpaid individuals producing in their space as a sort of gigantic farm system for talent and fame. This system is filled with simple enthusiasts, but also includes a lot of people consciously trying to make the leap to a career in what they’re currently doing for free. Either way, a tiny fraction of this group will become popular to varying extents. Rather than competing with this subset, many existing professional operations will simply snap these individuals up.

Take Nate Silver, the subject of much attention this election cycle. He started writing about politics in a Daily Kos diary, then launched his own blog on his own domain. Eventually, this was snapped up by The New York Times. The article on this is telling:

In a three-year licensing arrangement, the FiveThirtyEight blog will be folded into NYTimes.com. Mr. Silver, regularly called a statistical wizard for his political projections based on dissections of polling data, will retain all rights to the blog and will continue to run it himself.

In recent years, The Times and other newspapers have tapped into the original, sometimes opinionated voices on the Web by hiring bloggers and in some cases licensing their content. In a similar arrangement, The Times folded the blog Freakonomics into the opinion section of the site in 2007.

Forbes did this with Modeled Behavior; The Atlantic and now The Daily Beast have done it with Andrew Sullivan’s Daily Dish. In publishing, Crown did this with Scott Sigler, and St. Martin’s Press did this with Amanda Hocking.

Suffice it to say, these markets continue to be greatly disrupted. However, I do not think the adapted, matured versions of these markets will involve the utter extinction of professional institutions.

YOU GOT YOUR PROFESSIONAL IN MY PERSONAL

I consider my Twitter account to be extremely personal. No one is paying me to be there. With a handful of exceptions, I don’t have any professional relationships with the people I follow or am followed by there.

But there are definitely people who I feel have followed me because of some notion that it might help their career. Not because I’m some special guy who’s in the know, but because they think, say, that following everyone who seems to talk a lot about social media will somehow vaguely translate into success in a career in that industry. A lot of people who consider Twitter a place for human beings to talk to one another as private individuals have a low opinion of such people.

But I cannot deny that I have, on occasion, used Twitter to my professional advantage. And it’s not as though there’s a line in the sand for any of these services stating FOR PERSONAL USE ONLY. It’s difficult for journalists of any kind to treat anything they say in public as something that can be separated from their profession. I have seen some create distinct, explicitly labeled personal Twitter accounts, with protected tweets. Of course, Jeff Jarvis would point out that they are merely creating another kind of public by doing so.

Moreover, more and more of the services we use in our personal lives are having implications for our employers. How many of us have had an employer ask us to “like” the company page on Facebook? Or share a link to a company press release? These services are far too new for settled expectations to have formed around them. Is this overstepping the boundaries of what is acceptable, or is it a legitimate professional responsibility we have to our employers?

In a world where a personal project or an answer on Stack Overflow can be added to your resume when applying for a job, the line between personal and professional is not quite as sharp as it used to be.

Take Marginal Revolution as an example. Is it a personal or a professional blog? Certainly Tyler Cowen and Alex Tabarrok are not paid to write what they post. But they are using the blog as a venue for participating in the larger conversation of the economics profession. Of course, they also post on any number of specific subjects that catch their interest. It is a platform both to promote their books and to solicit advice from their readers on which restaurants to check out when they are traveling.

Are categories like “personal” or “professional” even useful for describing things like Marginal Revolution? Is it an exceptional case, or–its particular level of popularity set aside–is it the new normal?

Fanboy Politics and Information as Rhetoric

News has to be subsidized because society’s truth-tellers can’t be supported by what their work would fetch on the open market. However much the Journalism as Philanthropy crowd gives off that ‘Eat your peas’ vibe, one thing they have exactly right is that markets supply less reporting than democracies demand. Most people don’t care about the news, and most of the people who do don’t care enough to pay for it, but we need the ones who care to have it, even if they care only a little bit, only some of the time. To create more of something than people will pay for requires subsidy.

-Clay Shirky, Why We Need the New News Environment to be Chaotic

There are few contemporary thinkers that I respect more on matters of media and the Internet than Clay Shirky, but his comment about how much reporting “democracies demand” has bothered me since he wrote it nearly a year ago now. I think the point of view implied in the quoted section above misunderstands what reporting really is, as well as how democracies actually work.

To understand the former, it helps to step away from the hallowed ground of politics and policy and focus instead on reporting in those areas considered more déclassé. The more vulgar subjects of sports, technology, and video games should suffice.

Fanboy Tribalism

One of the most entertaining things about The Verge’s review of the Lumia 900 was not anything editor-in-chief Joshua Topolsky said in the review itself. No, what I enjoyed most was the tidal wave of wrath that descended upon him from the Windows Phone fanboys, who it seemed could not be satisfied by anything less than a proclamation that the phone had a dispensation from God himself to become the greatest device of our time. The post itself has over 2,400 comments at the moment I’m writing this, and for weeks after it went up any small update about Windows Phone on The Verge drew the ire of this contingent.

The fanboy phenomenon is well known among tech journalists, many of whom have been accused of fanboyism themselves. It’s a frequent complaint among the Vergecast’s crew that when they give a negative review to an Android phone, they are called Apple fanboys; when they give a negative review to a Windows Phone device, they are called Android fanboys; and so on.

To the diehard brand loyalist, the only way that other people could fail to see their preferred brand exactly the same way that they see it is if those other people have had their judgment compromised by their loyalty to some other brand. So Joshua Topolsky’s failure to understand the glory that is the Lumia 900 stems from the fact that he uses a Galaxy Nexus, an Android device, and his Android fanboyism makes it impossible for him to accurately judge non-Android things.

There came a certain moment when I realized that fanboy tribalism was a symptom of something innate in human nature, and that you saw it in every subject that had news and reporting of some sort. It may have become cliché to compare partisan loyalty with loyalty to a sports team, but the analogy is a valid one. Just as there are brand fanboys, there are sports team fanboys and political party fanboys.

Back in middle school, I got really wrapped up in this–as a Nintendo fanboy. I had a friend who was a really big PlayStation fanboy, and we had the most intense arguments over it. I don’t think I’ve ever had arguments that got as ferocious as those since–not about politics, not about morality, not about anything. We would each bring up the facts we thought should have made it obvious which console was superior and then get infuriated when the other side didn’t immediately concede defeat. I personally always came prepared with the latest talking points from Nintendo’s very own Pravda, Nintendo Power magazine.

Cognitive Biases and Group Dynamics

Cognitive science has a lot to say about why people act this way. A lot of progress has been made in cataloging the various biases that skew how human beings see the world. Acknowledging that people have a confirmation bias has become quite trendy in certain circles, though it hasn’t really improved the level of discourse. My favorite trope in punditry these days is when one writer explains that a different writer, or a politician they disagree with, can’t see the obvious truth because of confirmation bias–ignoring the fact that the accuser has the very same bias, as all humans do!

Most of the discussion around cognitive biases centers on how they lead us astray from a more accurate understanding of the world. The more interesting conversation focuses on what these biases emerged to accomplish in the first place, in the evolutionary history of man. The advantages of such biases for cementing group cohesion in hunter-gatherer societies are explored by moral psychologist Jonathan Haidt in his recent book The Righteous Mind. Arnold Kling has an excellent essay in which he applies Haidt’s insights to political discourse.

The fact is that even in our modern, cosmopolitan world, we human beings remain a tribal species. Only instead of our tribes being the groups we were born among and cooperate with in order to survive, we have the tribe of Nintendo, the tribe of Apple, and the tribe of Republicans.

When the Apple faithful read technology news, they aren’t looking for information, not really. They’re getting a kind of entertainment, similar to the kind that a Yankee fan gets when reading baseball news. Neither is trying to inform any decision.

Political news is exactly the same. When a registered Democrat reads The Nation, we like to think that something more sophisticated is going on than with our Apple or Yankee fan. But there is not. All of them might as well be my 13-year-old self, reading the latest copy of Nintendo Power. The Democrat was already going to vote for the Democratic candidate; it doesn’t matter what outrageous thing the latest Nation article claimed Republicans were doing.

Information as Rhetoric

I think the fear that there might not be enough truth-seekers out there fighting to get voters the salient facts about the rich and powerful is misplaced, for a few reasons. For one thing, in this day and age it is much easier to make information public than it is to keep it secret. For another, it is rarely truth-seekers who leak such information–it is people who have an ax to grind.

The person who leaked the emails from the Climate Research Unit at the University of East Anglia wasn’t some heroic investigative journalist with an idealistic notion of transparency. They were undoubtedly someone who didn’t believe in anthropogenic global warming and wanted to dig up something to discredit those who did–a skeptic fanboy, if you like, out to discredit the climate fanboys.

The people who get information of this sort out to the public are almost always pursuing their own agendas and attempting to block someone else’s. It’s never about truth-seeking. That doesn’t invalidate what they do, but it does shed a rather different light on getting as much information as “democracies demand”. Democracies don’t demand anything–people have demands, and their demands are often to make the people they disagree with look like idiots and to keep them from having any power to act on their beliefs.

To satisfy either their own demands or those of an audience, some people will pursue information to use as a tool of rhetoric.

How Democracies Behave

Let us think of this mathematically for a moment. If information is the input, and democracy is the function, then what is the output?

I’m not going to pretend to have a real answer to that. There’s an entire field, public choice, with scholars dedicating a lot of research and thought to understanding how democracies and public institutions in general behave and why. My father has spent a lot of time thinking about what impact information in particular has on political and social outcomes. I am no expert on any of these subjects, and will not pretend to be.

I will tentatively suggest, however, that people do not vote based on some objective truth about what benefits and harms us as a society. I think people vote based on their interests: their narrow material interest, such as whether a particular law is likely to put them out of work or funnel more money their way, but also their ideological or tribal interest–whether it advances a cause they believe in, or a group they consider themselves a part of.

So I don’t really see a reason to insist on subsidizing journalism. All that will accomplish is bending those outlets towards the interests of the ones doing the subsidizing.

Publicness and the Modern Career

This week, I accepted a new job. It will be a big change for me–among other things, I will be leaving the DC area and moving to New York. I may talk about that in more detail some other time. For now, I’d like to focus on how a blog post I wrote three years ago and a recent connection on LinkedIn made it possible for me to get this job in the first place.

Writing

I love to write, and I always have. It is one of the few true constants across my entire life. These days, anyone with a love for writing should be putting their work online. If you are already going to be investing the time and energy to write something, you might as well put it where people might conceivably find it. At minimum, it makes it easier to share with friends and family who are geographically scattered. At best, you open yourself up for a lucky break.

I have been blogging since November of 2004, when I was 19 and my interests were primarily politics and philosophy. I had been writing online in one form or another for years before that, but it mostly involved arguing about religion or video games or whatever I happened to feel strongly about at the time in various forums. Blogging was different; it became my method of choice for thinking out ideas through writing.

The early stuff I wrote was juvenile or pretentious, or both, but the mere act of writing helped me get better over time. The more I did it, the more I enjoyed it, and the more easily it came to me.

In late 2008 I started a new blog specifically for longer analytical pieces on technology and new media. I wanted a blog that I could point potential employers to without the risk that while they were there they might stumble into some dumbass thing I had written when I was 19. They would have to go to the extra effort of googling me to do that!

Medialets

Back in 2009, an app analytics company called Pinch Media released a SlideShare presentation based on its data on iPhone app usage. It went viral, briefly becoming the talk of the tech blogosphere and even getting a nod from The Onion. I took issue with the angle the tech blogosphere’s coverage approached it from, and also with how some of the data was presented in the slides themselves.

So I wrote a critique on my blog, fully expecting that it would only be seen by the handful of friends and family who usually read my posts. Shortly after posting it, the analyst for Pinch Media jumped in with some salient remarks in the comments section, which was a fun surprise.

Then I was contacted by someone from a company called Medialets, which at the time was one of Pinch Media’s competitors in the mobile analytics space. Rana, one of their cofounders, asked if I would be open to talking on the phone.

We talked, and she floated the idea of maybe having me work with them on a project by project basis. It was definitely more interesting than the job I had at the time. But I did have a job, and grad school, and a girlfriend in DC, and family and friends in northern Virginia. I spent a lot of hours driving between all of them, with very little free time left afterwards. So I was interested, but I didn’t follow up, and they didn’t either. I followed Rana and Eric Litman, Medialets’ CEO, on Twitter. After a while, I noticed that Rana had left the company for other ventures, so I assumed I was unlikely to have any dealings with them in the future.

Then, a little over a month ago, Google’s Bradley Horowitz connected to me on LinkedIn for reasons that remain a mystery to me. But I figured he was probably connected to some interesting people, so I looked. I saw Eric, and remembered him from my previous encounter with his company, and thought–why not? So I connected with him.

In the time since my last interaction with Medialets, I had added an MA in economics and a job in online ad operations to my resume. It just so happened that they were looking for someone to work in ad operations, so Eric reached out to me about a job.

Living in Public

New media is not a panacea; it still takes experience and education to qualify for a job, and that isn’t going to change. But your ability to do a job is far from the only thing that determines whether or not you get it. To start with, your potential employer has to know you exist.

Jeff Jarvis has recently championed the benefits of living in public, and one of those benefits is definitely that it creates the opportunity to be discovered. If the kind of work you want to do involves skills that can be demonstrated online, you should be demonstrating them.

But there is more to what a person would be like as an employee than what skills they have. One of the benefits of the various social networks we’re on is that people can get a feel for our personalities over time. While this may not be a perfect indication of what we would be like to work with, I think it’s fair to assume that everyone prefers to work with people they like rather than people they don’t. If you have a blog that puts your skills and personality on display, then you are creating the possibility that someone will grow to like you–someone who either has a job you are qualified for or knows someone who does.

Take Eli for example. His blog just oozes social science smarts. If you are looking for a young, brilliant economist, reading his blog should be enough to convince you that he’s your guy. Moreover, you really get a sense of what his interests are, as well as of his sense of humor. I had a class with him years ago, but I really came to know his personality afterwards, by talking to him on Twitter and reading his posts.

Jeff Jarvis thinks that we are in the midst of a moral panic about privacy concerns, and I tend to agree. The privacy conversation is an important one, and we need to have it, but we should be very careful not to undervalue what each of us can get from moving more of ourselves into the public.