Embracing Performance: Some Thoughts on Clubhouse

And now for something a little different. I’d been skeptical of the hype around Clubhouse, as I’m old enough to remember many waves of social media hype in my lifetime. The reality is invariably more prosaic.

That said, the reality in question has led me to many deep friendships and I dare say to community. I’m not so jaded as to think the way we shape our online interactions is unimportant. And as I took a closer look at Clubhouse, I began to notice some interesting things about it.

A lot of attention has been paid to the fact that Clubhouse is audio-only, making it the first audio-only service among social media platforms organized around individual user feeds. But this wasn’t all that interesting to me. From my perspective, it is simply inconvenient; I have a day job, I have small kids, and I live in a two bedroom apartment. There are few opportunities for me to simply hang out and yak out loud where I’m not bothering someone. This is the main reason I have not been much involved in podcasting.

No, what interested me is threefold:

  • Users do not see a feed of posts, but a feed of “rooms” that are currently active and contain someone they follow or are run out of a “club” they follow.
  • The rooms are divided between the “stage” of people who can talk and the audience of people who cannot.
  • Some subset of the people on the stage have the authority to add people from the audience to the stage, or send people from the stage to the audience.

The tl;dr of my thoughts here is that this is a straightforward embrace of the nature of social media conversations as performance, and that designing around that basic assumption is in fact a good thing. Letting any user create a room provides the basic openness of any other social media platform. Giving them a say over who gets to be on the “stage” empowers them to create the conditions for high-quality performances.

Many current and previous iterations of social media play this game where you’re encouraged to act like you’re in some kind of small group setting where everyone knows one another and can be at ease. Twitter is just for posting what you had for breakfast, Facebook is just for your IRL friendships, and so on.

But to the extent that this ever approximated reality (it never did) it did not survive the massive scaling up of these platforms. Even platforms built with the assumption that moderation would be needed, like forums or closed Facebook groups or chat rooms, scale about as well as comment section moderation. Which is to say, not very well at all.

Clubhouse seems to have a model that could scale well. Of course, it does this by allowing for a stage that can’t really accommodate all that many speakers and connecting it to an audience that can grow bigger than the biggest forum.

That’s the tl;dr. The rest looks at the question of social media performance in a little more depth.

Fake intimacy

When I was in high school, a lot of my friends had LiveJournals. LJ was both my first blog and my first social media, unless one counted forums and chatrooms (which typically are not categorized that way these days).

The way LiveJournal (which, amazingly, still exists) worked is that you had the reverse-chronological feed of your own entries, but then you also had a tab with a reverse chronological feed of all the LJs you followed. For me at the time, this was almost entirely classmates.

There was this weird thing where we kind of pretended that it was an actual, private journal. It’s not like there was an explicit agreement that that was how one ought to write; in fact, to have said it out loud would have invited mockery. But there’s no denying that that’s what everyone did, stylistically; they wrote like they were only writing for themselves. Only they very obviously were writing for other people, and the social aspect of LJ was baked right in.

What this meant was you got a lot of passive aggressive signaling and indirect reference, the subtweet before Twitter, the vaguebook before Facebook. With all the subtlety and grace that one would expect of a bunch of teenagers (not that adults have proven much better in this regard).

Posting on LiveJournal was a public act, inherently. You were writing for an audience, whether the one you had or the one that you wanted. You were absolutely not just writing for yourself, or else you’d just write on actual paper, with actual ink, where no one would be able to access your private words from an Internet-connected computer anywhere in the world.

Instead, the tantalizing experience a teen wished to replicate was the trope from fiction wherein someone, usually the love interest, would discover the main character’s diary. The private thoughts contained therein revealed that someone who nominally seemed superficial, or a jerk, or stoic actually had a great depth of character and was acting from highly admirable motives.

But of course, this trope does not actually work if the main character wrote those things expecting that the diary would be found. Indeed, if they did, then far from being admirable, they would be contemptible, engaging in a kind of emotional manipulation.

Our teenage LJ antics did not rise to the level of emotional manipulation, because they were far too transparent, even to teenagers. But this dynamic, where we pretend that a public platform is anything but that, has remained pervasive as other forms have taken off.

I had an on again, off again relationship with Facebook for years, because I was convinced it was simply going to be high school LJ all over again. And who would deny that, to some extent, it has been? But what finally convinced me was being a groomsman in a friend’s wedding in 2008, after which the wedding party all friended one another (if we weren’t friends already) and shared pictures of the event. The event itself was a lot of fun and it was very fun to continue feeling that sense of a shared experience with people afterwards.

Nevertheless, people (and I’m no more innocent of this than anyone) still talk out of both sides of their mouths on these platforms. When someone jumps on something we post, we’re “just” tweeting or “just” saying something on Facebook, as if it were a private space, or even a small intimate gathering where our statements cannot reach the ears of anyone outside the physical room.

This is very silly. Even a private Twitter account and a friends-only Facebook post speak to publics. We need to have more acceptance of this basic fact, of the basic role performance has in our lives in general, on and off social media.

Structuring the stage

There are, of course, intimate conversations, even on the Internet. DMs, mobile messaging services, emails, and chats can all be done on a one-to-one, or few-to-few basis.

At the other end you get extreme one-to-many scenarios; an article at a popular publication or a post at a popular blog, perhaps without a comment section. A recorded video or audio segment disseminated over Netflix or YouTube or a podcast app.

The thing that is both tricky and interesting about modern social media platforms is that they can be anywhere on this spectrum, for any one person, and change very suddenly. You may go to Twitter mainly for the 20 or so people you follow and who mostly follow you back. Or you may go to Twitter to read or reply to accounts who have millions of followers. Or you may have millions of followers yourself and treat the account as mostly a broadcast platform.

Moreover, you may have a small to midsized follower base and yet have one of your tweets go massively viral, resulting in millions of views and hundreds of replies. In other words, you might expect your public to be relatively intimate or at least predictable, and then find that you’re broadcasting to an enormous audience.

I think that most people who tweet, whatever their follower count, are aware of the possibility of a viral tweet. The style of their writing reflects this just as the style of those old LJ entries reflected the awareness that someone was or could be reading.

While we all like having our spaces that aren’t going to suddenly become broadcast platforms on a dime, there isn’t actually a shortage of those; as I mentioned, there are plenty of ways to engage in one-to-one or few-to-few communication these days. The fact that Twitter and similar platforms cannot really give you that isn’t a knock against them per se, it’s just something to take into account. If the particular niche that Twitter fills is not for you, then best stay off it or only tweet anodyne things.

The problem isn’t the virality of Twitter. The problem is that the user experience of going viral or having a large audience sucks. This isn’t meant to elicit sympathy, it’s an observation about Twitter’s design: once a large number of people like, retweet, and reply to your tweets, the notifications tab becomes essentially unusable. As someone with only a modest follower base myself I only see this on the occasional tweet of mine that gets a bit more attention, and each time I’m left wondering how the big accounts manage that kind of engagement on a regular basis. How do you not simply end up precluding any kind of two-way communication at all, tweeting only to broadcast some statement to an audience and little else? And yet I talk to people with accounts that size all the time; they somehow seem to manage it.

That they manage it doesn’t make the design good, though. The notification tab is fundamentally designed for the median tweet by the median user of Twitter, who has a low follow count and low engagement. But by definition this means that it’s not designed for the users who are the biggest draw to the platform for most users.

I don’t think Twitter set out to be a platform that had to balance one-to-one and one-to-many all in the same interface. Its founders had the vague idea that people just liked passively sharing what they had for breakfast with their friends, and went from there. They’ve made an enormous number of changes to adapt the platform to how it is actually used but none of them address the usability problem described above.

Clubhouse, on the other hand, seems better designed to strike this balance. Users with small follower counts can start rooms that their friends join and have relatively few-to-few conversations. Like closed Facebook groups, they can even create closed, invite-only rooms. But there’s never going to be a situation where their room goes massively viral and they’re unable to manage it, because going viral just means that the audience balloons but the stage does not. The number of people talking, in other words, remains the same, unless the person who created the room (or the people they gave mod authority to) consciously choose to add more.

In the spectrum from one-to-one and few-to-few to few-to-many, a user’s Clubhouse feed is kind of like a social media feed of Skype group calls mixed in with podcasts and conference panels that have live audiences. But I think this works better at managing all ends of the audience size spectrum than other social media platforms currently do. No matter how big your audience balloons, it isn’t going to impact the usability of the app for you, nor is it going to interrupt the conversation you’re having in any way.

And I think there’s more that they can do within this framework. There’s a hard ceiling on how many people you can have on stage and still hope to have a meaningful conversation. However, I think this could be extended somewhat by giving moderators more tools. For example, allowing a moderator to set a queue for who speaks when, as well as time limits before someone is automatically muted so the next person can go. And just as members of the audience can “raise their hand” to ask to be added onto the stage now (a feature that can be turned off in a given room), people on stage might be able to raise their hand to ask the moderator to let them jump their position in the queue, based on something that was just said.
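A sketch of how such a queue might work, in Python. This is a hypothetical feature, not Clubhouse’s actual API; the class and method names are invented for illustration:

```python
# A sketch of the hypothetical moderation queue described above. This is
# not Clubhouse's actual API; the class and method names are invented.
from collections import deque

class SpeakerQueue:
    """Moderator-controlled speaking order with a per-turn time limit."""

    def __init__(self, time_limit_seconds: int = 120):
        self.queue = deque()
        self.time_limit = time_limit_seconds  # a real app would auto-mute after this

    def join(self, user: str) -> None:
        """Add a speaker to the back of the line."""
        self.queue.append(user)

    def bump_to_front(self, user: str) -> None:
        """Moderator lets a hand-raiser jump the queue."""
        self.queue.remove(user)
        self.queue.appendleft(user)

    def next_speaker(self):
        """Hand the mic to whoever is next, or None if the queue is empty."""
        return self.queue.popleft() if self.queue else None

room = SpeakerQueue()
for user in ["ana", "ben", "carla"]:
    room.join(user)
room.bump_to_front("carla")       # carla raised her hand about the last point
print(room.next_speaker())        # carla
print(room.next_speaker())        # ana
```

The point of the sketch is how little machinery is involved: the moderator’s powers are just mutations of an ordered list, which is why a skilled moderator matters more than the software.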

Little things like that can go a long way. But of course you need a skilled moderator to be able to run a large stage that way, so in most cases I expect small stages would still dominate over large ones.

But note that all of this is only possible by dividing rooms into speakers and listeners, stages and audiences. This is an explicit acknowledgement, right in the design, of the fact that these platforms are used for performing to publics. It’s not hard to design for small group conversations. The challenge is designing a platform that can handle both, the way modern social media platforms have to be able to. Clubhouse does this by building around the few-to-many scenario in a way that doesn’t make it hard to use for few-to-few scenarios.

I expect we’ll see more of this in the near future, and not confined to audio-only mediums. And I take it as a good sign, a sign of maturation in the ecosystem.

In Praise of Blogosphere

In 2004 I jumped into the world of blogging in a big way, both in the sheer amount that I read on a daily basis and my personal output in a widely unread blog with a name only a pretentious 19-year-old could come up with. At that time, Very Serious Person that I was, I hated the term “blogosphere”. At a time when I was angrily arguing that the Mainstream Media was overrated and bloggers were the future, “blogosphere” seemed awkward and embarrassing. I tried to avoid using it, instead resorting to things like “blog ecosystem”. In the end, I relented, because it was clear that blogosphere was here to stay, and it began to feel even more awkward to be the only one not saying it.

Nine years is a long time in the cycle of media storytelling, to say nothing of technology and technological adoption. Nowadays you’ll still get the occasional scare piece to the tune of “Jesus Christ the Internet is nothing but one, big, angry mob of wide-eyed vigilantes!” but these are at least as likely to cover people’s activities on Twitter and similar social media as on blogs. For the most part, the role of the blog has been cemented and matured, within a larger (dare I say it?) ecosystem of social interactions and media platforms.

There is greater appreciation for the fact that a blog is nothing but one part of the greatly lowered barriers to entry into producing public content, and that non-professionals can and do contribute a great deal to the public conversation every day. Some of them have aspirations of becoming professional contributors to this conversation, but many do not.

As perceptions and usage of the blog have matured, there has been an increasing allergic reaction to some of the rhetoric of the early adopters. More than once I have seen friends I follow on Twitter complain about the term blogosphere and wish that its usage would cease.

I want to defend the much maligned blogosphere, and not just on the (very valuable) rule of thumb that if 19-year-old Adam Gurri believed it, there was probably something crucially wrong about it. Blogosphere was a term coined and adopted by people who were sick of the modes of conversation inherited by modern media from our mass media past. Bloggers who wrote about new media in the first half of the last decade were sick of bad fact-checking and of baked-in moral assumptions hidden under a veil of fake objectivity. Most of all, they were sick of people taking themselves too damned seriously.

That is why blogs covering rather serious topics nevertheless took on silly or offensive names such as Instapundit or Sandmonkey. It’s why many posts that carried ever-increasing weight in the public discussion used an inordinate amount of profanity to make their points.

The equilibrium has shifted since then; now there are a greater number of professional outlets that have adapted their rhetoric to be less stilted and less ostentatiously objective, if still intended to be respectable. And the blogs that carry weight have, in my subjective perception, seemed to tone down the juvenile naming conventions and swearing in posts, to a certain extent.

Nevertheless, I like blogosphere because it has that overtly geeky, tongue in cheek side to it that I think is unlikely to become irrelevant in my lifetime. We could all stand to take ourselves a little less seriously.

Rereading The Long Tail

This was officially launch week for The Umlaut, a new online magazine that my friends Jerry Brito and Eli Dourado have started. There are five of us who will be regular writers for it. For my first piece, I thought it might be fun to go back and re-examine The Long Tail almost seven years after it was published.

The Long Tail had a big impact on the conversation around new media at the time, and was very personally significant. The original article was published in October of 2004, a mere month before I began blogging. Trends in new media were a fascination for me from the beginning, so I kept up with Chris Anderson’s now-defunct Long Tail blog religiously.

Nineteen years old and a tad overenthusiastic, I strongly believed that the mainstream media was going the way of the dinosaur and would be replaced by some distributed ecosystem of mostly amateur bloggers. In short, I thought the long tail was going to overthrow the head of the tail, and that would be that. Moreover, I thought that all content would eventually be offered entirely free of charge.

That was a long time ago now, and my views have evolved in some respects, and completely changed in others. I think that the head of the tail is going to become larger, not smaller, and professionals are here to stay–as I elaborate on here. However, I do think that the growth of the long tail will be very culturally significant.

When I began rereading The Long Tail, I expected to find a clear argument from Anderson that he thought the head of the tail would get smaller relative to the long tail. Instead, he was frustratingly vague on this point. Consider the following quote:

What’s truly amazing about the Long Tail is the sheer size of it. Again, if you combine enough of the non-hits, you’ve actually established a market that rivals the hits. Take books: The average Barnes & Noble superstore carries around 100,000 titles. Yet more than a quarter of Amazon’s book sales come from outside its top 100,000 titles. Consider the implication: If the Amazon statistics are any guide, the market for books that are not even sold in the average bookstore is already a third the size of the existing market—and what’s more, it’s growing quickly. If these growth trends continue, the potential book market may actually be half again as big as it appears to be, if only we can get over the economics of scarcity.

Let us unpack this quote a little.

First, Anderson is offering the fact that more than 25% of Amazon’s book sales occur outside of its top 100,000 titles as evidence of the revenue potential of the long tail. But this is conceptually very flawed. At the time of the book’s publication, Amazon sold some 5 million titles. If nearly all of the additional revenue beyond the top 100,000 titles was encompassed by the following 100,000 titles, then 4% of Amazon’s titles account for nearly all of its book revenues. And there is good reason to believe that that is exactly how the distribution played out, then as now.

The fact that 200,000 is a larger number than 100,000 is indeed a significant thing; it shows the gains that a company can make from increasing its scale if it is able to bring down costs enough to do so. But to claim that this is evidence of the commercial potential of the long tail is flat-out wrong. We’re still talking about a highly skewed power law distribution–in fact, an even more skewed power law distribution, as we used to speak of 20% of books accounting for 80% of the revenue, and here we are talking about 4% of the books accounting for something on the order of 99% of the revenue.
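To see how both facts can hold at once, here is a toy Python sketch of a Zipf-distributed catalog. The catalog size matches the 5 million figure above, but the exponent is an assumption chosen purely for illustration, not Amazon’s actual sales curve:

```python
# Toy illustration: a single Zipf-like sales distribution can make both
# claims true at once -- a substantial share of revenue comes from
# outside the top 100,000 titles, AND a tiny fraction of titles earns
# most of the revenue. The exponent is an assumption, not Amazon data.

N_TITLES = 5_000_000   # rough catalog size cited for Amazon at the time
HEAD = 100_000         # the "big-box store" head of the tail
EXPONENT = 1.0         # classic Zipf; steeper exponents concentrate more

head_revenue = 0.0
total_revenue = 0.0
for rank in range(1, N_TITLES + 1):
    sales = 1.0 / rank ** EXPONENT   # relative sales at this popularity rank
    total_revenue += sales
    if rank <= HEAD:
        head_revenue += sales

head_share = head_revenue / total_revenue
print(f"Head is {HEAD / N_TITLES:.1%} of titles "
      f"but {head_share:.1%} of revenue")
print(f"Tail (everything else) is {1 - head_share:.1%} of revenue")
```

With these assumed numbers, roughly a quarter of revenue comes from outside the top 100,000 titles even though that head, just 2 percent of titles, earns about three quarters of the total. The “long tail” fact and the extreme skew coexist in one distribution.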

This argument appears several times throughout the book, in several forms. At one point he talks about how the scaling up of choices makes the top 100 inherently less significant. Which is true, but it does not make the head of the tail any less significant; it just means that there is a larger number of works within that head.

Second, this bit about “if only we can get over the economics of scarcity.” Anderson argues, repeatedly, that mass markets and big blockbusters are an artifact of a society built on scarcity, and the long tail is a creation of the new economics of abundance. This is wrong to its core.

As I argue in my first piece at The Umlaut, we have been expanding the long tail while increasing the head of the tail since the very beginning of the Industrial Revolution. Scale in the upward direction fuels scale in the outward direction. Consider Kevin Kelly’s theory of 1,000 true fans, the paradigm of the long tail success.

Assume conservatively that your True Fans will each spend one day’s wages per year in support of what you do. That “one-day-wage” is an average, because of course your truest fans will spend a lot more than that.  Let’s peg that per diem each True Fan spends at $100 per year. If you have 1,000 fans that sums up to $100,000 per year, which minus some modest expenses, is a living for most folks.

Now ask yourself: how do we get to a world where someone can make a living by having 1,000 true fans, or fewer? Or 1,000 more modest fans, or fewer?

One way we get to that world is through falling costs. If we assume a fixed amount that some group of fans is willing to pay for your stuff, then progress is achieved by lowering the cost of producing your stuff.

Another way is for everyone to get wealthier, and thus be able to be more effective patrons of niche creators. If I make twice as much this year as I did last year, then I can afford to spend a lot more above and beyond my costs of living.

Another conceivable way is sort of a combination of the first two–falling costs for the patrons. If I make as much in nominal terms as I did last year, but my costs of living fall by half, then it is effectively the same as though I had doubled my income.
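The three routes above reduce to a single inequality: fan support must cover production costs plus the creator’s cost of living. A minimal sketch, with all figures made up for illustration:

```python
# The three routes to creator viability as one inequality. Every number
# below is an illustrative assumption, not data.

def makes_a_living(fans, spend_per_fan, production_costs, cost_of_living):
    """True if direct fan support covers making the work and living on it."""
    return fans * spend_per_fan >= production_costs + cost_of_living

# Kelly's baseline: 1,000 fans at $100/year is $100,000.
print(makes_a_living(1_000, 100, production_costs=20_000, cost_of_living=50_000))

# Route 1: cheaper production lowers the bar, so fewer fans suffice.
print(makes_a_living(600, 100, production_costs=2_000, cost_of_living=50_000))

# Route 2: wealthier fans spend more per head.
print(makes_a_living(400, 200, production_costs=20_000, cost_of_living=50_000))

# Route 3: a cheaper cost of living lowers the bar directly.
print(makes_a_living(500, 100, production_costs=20_000, cost_of_living=25_000))
```

Each route shrinks the fanbase needed to clear the same bar, which is the sense in which all three are the same kind of progress.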

Put all three of these trends together and you have perfectly described the state of material progress since the onset of the Industrial Revolution. Huge breakthroughs in our productive capacities have translated into a greater ability to patronize niche phenomena.

Obviously the personal computer and the Internet have taken this trend and increased its scale by several orders of magnitude–especially in any specific area that can be digitized. But that doesn’t mean we’ve entered a new era of abundance. The economics are the same as they have always been. The frontier has just been pushed way, way further out.

Moreover, the blockbuster is not an artifact of scarcity. Quite the opposite. The wealthier and more interconnected we are, the taller the “short tail” can be. In my article, I mention the example of Harry Potter, which was a global hit on an unprecedented scale (this Atlantic piece estimates the franchise as a whole has generated something like $21 billion). Hits on that scale are rare, giving us the illusion at any given moment that they are a passing thing, a relic of a bygone era of mass markets. But the next Harry Potter will be much, much bigger than Harry Potter was, because the size of the global market has only grown and become more connected.

Consider Clay Shirky’s observation that skew is created when one person’s behavior increases the probability that someone else will engage in that behavior “by even a fractional amount”. His example involves the probability that a given blog will get a new reader, but it extends to just about every area of human life. And the effect he describes, but does not name, is the network effect–one additional user of Facebook increases the probability that they will gain yet another one, one additional purchaser of a Harry Potter book increases the probability that yet another person will purchase it.
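A toy rich-get-richer simulation makes Shirky’s point concrete: each new reader picks a blog with probability proportional to its current readership, and even that fractional feedback produces heavy skew. The blog count, reader count, and seed are arbitrary choices for illustration:

```python
# A toy rich-get-richer simulation of Shirky's observation: every new
# reader chooses a blog with probability proportional to its current
# readership. All parameters are arbitrary illustration.
import random

random.seed(42)
N_BLOGS = 200
readers = [1] * N_BLOGS            # every blog starts with a single reader

for _ in range(20_000):            # each newcomer favors the already-popular
    blog = random.choices(range(N_BLOGS), weights=readers)[0]
    readers[blog] += 1

readers.sort(reverse=True)
top_share = sum(readers[: N_BLOGS // 10]) / sum(readers)
print(f"Top 10% of blogs hold {top_share:.0%} of all readers")
```

No blog is intrinsically better than any other in this model; the skew comes entirely from the feedback loop, which is Shirky’s point.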

And we know, from the diffusion of innovations literature, that there comes a certain point at which one additional person increases the probability by a lot more than a fractional amount. As Everett Rogers put it:

The part of the diffusion curve from about 10 percent adoption to 20 percent adoption is the heart of the diffusion process. After that point, it is often impossible to stop the further diffusion of a new idea, even if one wished to do so.

Now, if network effects are what create skew in the first place, and we are living in the most networked age in history, how plausible does Anderson’s argument seem that the head of the tail will be of decreasing significance because of new networks?

What Does He Really Think?

Part of what’s frustrating about the book is that Anderson doesn’t really make a solid claim about how big he thinks the head of the tail is going to be relative to the tail. He provides some facts that are irrelevant to answering this question, such as the Amazon statistic described above. In some places he seems like he’s saying the head will be smaller:

The theory of the Long Tail can be boiled down to this: Our culture and economy are increasingly shifting away from a focus on a relatively small number of hits (mainstream products and markets) at the head of the demand curve, and moving toward a huge number of niches in the tail. In an era without the constraints of physical shelf space and other bottlenecks of distribution, narrowly targeted goods and services can be as economically attractive as mainstream fare.

The long tail is going to be “as economically attractive” as the head of the tail. That’s what he’s saying, right? If so, then he is wrong, for the reasons described above.

But maybe that isn’t what he’s saying. Consider:

This is why I’ve described the Long Tail as the death of the 80/20 Rule, even though it’s actually nothing of the sort. The real 80/20 Rule is just the acknowledgment that a Pareto distribution is at work, and some things will sell a lot better than others, which is as true in Long Tail markets as it is in traditional markets. What the Long Tail offers, however, is the encouragement to not be dominated by the Rule. Even if 20 percent of the products account for 80 percent of the revenue, that’s no reason not to carry the other 80 percent of the products. In Long Tail markets, where the carrying costs of inventory are low, the incentive is there to carry everything, regardless  of the volume of its sales. Who knows—with good search and recommendations, a bottom 80 percent product could turn into a top 20 percent product.

Here he seems to be saying that the 80/20 Rule will always remain true, but that this shouldn’t stop us from realizing how important the long tail is in our lives, and how much more important it will be in the future as we get an ever greater diversity of choices in the relatively niche. Moreover, companies should continue to extend their long tail offerings because, at any moment, one of them might suddenly jump to the head of the tail. So a Kindle book that’s only selling a handful of copies per year may suddenly go viral and make Amazon a ton of money.
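That “carry everything” incentive can be put as a one-line expected-value test, with all numbers invented for illustration:

```python
# Anderson's "carry everything" incentive as an expected-value test.
# All figures here are invented for illustration.

def worth_carrying(trickle_profit, p_breakout, breakout_profit, carrying_cost):
    """Carry a title if its expected annual profit beats the cost of listing it."""
    expected_profit = trickle_profit + p_breakout * breakout_profit
    return expected_profit > carrying_cost

# On a physical shelf, a slow seller can't pay its rent.
print(worth_carrying(trickle_profit=5, p_breakout=0.001,
                     breakout_profit=10_000, carrying_cost=50))    # False

# In a digital catalog, near-zero carrying cost flips the answer.
print(worth_carrying(trickle_profit=5, p_breakout=0.001,
                     breakout_profit=10_000, carrying_cost=0.10))  # True
```

Nothing about the demand side changes between the two calls; only the carrying cost does, which is why low inventory costs, not a new economics, drive the incentive.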

If that’s what he believes, then he is correct. But the mixture of the bad accounting of the sort in the top 100,000 books example above, statements such as the one quoted above about what “the theory of the Long Tail can be boiled down to”, and this last quote about the 80/20 rule, force me to conclude that Anderson’s thinking is simply muddled on this particular point.

Credit Where Credit is Due

Finally, if there’s one thing that I think we can all agree with Anderson on, it is that the expansion of the long tail has greatly increased the quality of our lives. Whether it’s people like Scott Sigler, who has managed to make a living from his fans, or the passionate community of a small subreddit, there is an ever-expanding virtual ocean of choices in the long tail today.

Chris Anderson argued that the fact that something is not a hit of the blockbuster variety does not mean it is a miss. There are some things that are much more valuable to a small group of people than they are to everyone else, thereby precluding their ability to become a blockbuster. There are also some things that might be equally appealing to the same number of people as a blockbuster, but they simply were not lucky enough to be among the few that won that particular lottery.

All of us live in both the head of the tail and the long tail, and I’m glad that Anderson convinced so many of the value of the latter.

The Collision of the Personal and the Professional


Eight years ago, when I was a pretentious, know-it-all 19-year-old, the conversation about new media was dominated by the rhetoric of bloggers and journalists, citizen and mainstream media. I had seen the blogosphere call out Dan Rather for running with forged documents as evidence. I learned of the role they played in making sure Trent Lott’s statements saw the light of day.

As far as I was concerned, newspapers and news outlets in general were old hat on their way to extinction, and blogs were the future.

What did I think this meant?

It meant that newspapers would unbundle. It meant that articles on the Iraq War or science features written by journalists with little background in the subject matter would be replaced by people living in Iraq, and actual scientists, who would have blogs. This wasn’t all in my head–such blogs existed and have only grown more numerous.

My thoughts on whether anyone would make money on this new way of things, and how, went back and forth. But I thought the future looked more like Instapundit and Sandmonkey than like The New York Times and The Washington Post.

As I have witnessed the evolution of the web over the years, aged to a point beyond a number ending in -teen, and followed the conversation and research on new media, my point of view has changed–to say the least.

It’s not simply that it was wrong, but that it was far too narrow. It has not only become clear that professional media, in some form, is here to stay. It has also become clear that the old blog vs mainstream media perspective misses the big picture.

What has happened is that many activities that we conducted in our personal lives have moved online; they have become digital and they have become some approximation of public. This has big implications for other people’s professions–one tiny corner of which is the impact that personal blogs have had on professional media. But it also has an impact on our own professional lives.

In short, the personal and the professional are colliding on a number of fronts. How this collision will play out is an open question.


The vast majority of my conversations with nearly all of my friends and family occur in a digital format. It happens on Twitter, Facebook, and Tumblr. It happens in email, in text messages, and in Google Talk chat windows. A very large proportion of this is public or semi-public.

I also enjoy writing about subjects that I’m thinking about. For that reason, I’ve maintained a blog in one form or another since 2004. I have never made one red cent off of my blogging. It has always been something I’ve done out of enjoyment of the writing itself.

Before the Internet, my writing would undoubtedly have been relegated to the handful of friends I could strong-arm into looking at some copies I made for them. I certainly couldn’t have asked this of them on a very regular basis, so most of my writing would have remained unread–or, discouraged, I would have written a lot less.

The thing I enjoyed about blogging from the beginning was that it provided me with a place to put my writing where people could find it, without me having to make the imposition of bringing it to them. However, translating this private analogue activity into a public and digital one has implications beyond this simple convenience.

For one thing, it makes it possible for me to connect with new people who share my interests from anywhere in the world. It can also have implications for my professional life. If I write something insulting about my coworkers, or, say, something extremely racist, odds are it could get me fired and possibly have an impact on my long-term employability.

Conversely, just as I can discover and be discovered by new friends, I can also discover and be discovered by people who might provide me with a career opportunity–and indeed this happened to me earlier this year.

When enough enthusiasts move online in this manner, it begins to have consequences for the world of professional writing in general. One lone guy blogging about a few esoteric subjects isn’t going to have much of an impact. Over 180 million people writing about everything under the sun will have some serious implications. If we take Sturgeon’s Law at face value and say that you can throw 90 percent of that in the garbage, we’re still talking about tens of millions of people writing pieces of average to excellent quality.

This is a dramatic expansion in the supply of written works. This has understandably made professional producers of written words sweat more than a little. One way of looking at this is from the old blog vs mainstream media perspective. A better way to look at it is from the understanding that any professional content outlet is going to have to adapt to the new reality of personal production if they want to survive.

That process of adaptation has been messy and is still ongoing.


What my 19-year-old self did not realize is that the media business has never really sold information. It has sold stories, it has sold something for groups to rally around and identify themselves with or against. There is still money to be made by selling this product. Clay Johnson has documented some methods that he finds vile, but there are plenty of perfectly respectable ways to do it as well.

Take The Verge–a technology site that launched last year. It does not suffer from the baggage of a legacy business–it was born online and lives online. It was created by a group of writers from Engadget, another professional outlet that was born on the web, who thought they could do better on their own. I have argued that their initial success was made possible in part by the fact that the individual writers had built up a community around them, through their podcast and through their personal Twitter accounts.

The Verge invests a lot in building its community. The content management tools offered in its forums are, the site claims, just as powerful as the ones its own writers use to compose posts. Forum posts are frequently highlighted on the main page, and the writers engage with their readers there and on various social media.

Another way that the professional world has adapted is by treating the group of unpaid individuals producing in their space as a sort of gigantic farm system for talent and fame. This system is filled with simple enthusiasts, but also includes a lot of people consciously trying to make the leap to a career in what they’re currently doing for free. Either way, a tiny fraction of this group will become popular to varying extents. Rather than competing with this subset, many existing professional operations will simply snap these individuals up.

Take Nate Silver, the subject of much attention this election cycle. He started writing about politics in a Daily Kos diary, then launched his own blog on his own domain. Eventually, this was snapped up by The New York Times. The article on this is telling:

In a three-year licensing arrangement, the FiveThirtyEight blog will be folded into NYTimes.com. Mr. Silver, regularly called a statistical wizard for his political projections based on dissections of polling data, will retain all rights to the blog and will continue to run it himself.

In recent years, The Times and other newspapers have tapped into the original, sometimes opinionated voices on the Web by hiring bloggers and in some cases licensing their content. In a similar arrangement, The Times folded the blog Freakonomics into the opinion section of the site in 2007.

Forbes did this with Modeled Behavior; The Atlantic, and now The Daily Beast, did this with Andrew Sullivan’s Daily Dish. In publishing, Crown did this with Scott Sigler, and St. Martin’s Press did this with Amanda Hocking.

Suffice it to say, these markets continue to be greatly disrupted. However, I do not think the adapted, matured versions of these markets will involve the utter extinction of professional institutions.


I consider my Twitter account to be extremely personal. No one is paying me to be there. With a handful of exceptions, I don’t have any professional relationships with the people I follow or am followed by there.

But there are definitely people who I feel have followed me because of some notion that it might help their career. Not because I’m some special guy who’s in the know, but because they think, say, that following everyone who seems to talk a lot about social media will somehow vaguely translate into success in a career in that industry. A lot of people who consider Twitter a place for human beings to talk to one another as private individuals have a low opinion of such people.

But I cannot deny that I have, on occasion, used Twitter to my professional advantage. And it’s not as though there’s a line in the sand for any of these services stating FOR PERSONAL USE ONLY. It’s difficult for journalists of any kind to treat anything they say in public as something that can be separated from their profession. I have seen some create distinct, explicitly labeled personal Twitter accounts, with protected tweets. Of course, Jeff Jarvis would point out that they are merely creating another kind of public by doing so.

Moreover, more and more services we use in our personal lives are having implications for our employers. How many of us have had an employer ask us to “like” the company page on Facebook? Or share a link to a company press release? These services are far too new for us to have expectations set about them. Is this overstepping the boundaries of what is acceptable, or is this a legitimate professional responsibility we have to our employers?

In a world where a personal project or an answer on Stack Overflow can be added to your resume when applying for a job, the line between personal and professional is not quite as sharp as it used to be.

Take Marginal Revolution as an example. Is it a personal or a professional blog? Certainly Tyler Cowen and Alex Tabarrok are not paid to write what they post. But they are using the blog as a venue for participating in the larger conversation of the economics profession. Of course, they also post on any number of specific subjects that catch their interest. It is a platform both for promoting their books and for soliciting advice from their readers on what restaurants to check out when they are traveling.

Are categories like “personal” or “professional” even useful for describing things like Marginal Revolution? Is it an exceptional case, or–its particular level of popularity set aside–is it the new normal?

Fanboy Politics and Information as Rhetoric

News has to be subsidized because society’s truth-tellers can’t be supported by what their work would fetch on the open market. However much the Journalism as Philanthropy crowd gives off that ‘Eat your peas’ vibe, one thing they have exactly right is that markets supply less reporting than democracies demand. Most people don’t care about the news, and most of the people who do don’t care enough to pay for it, but we need the ones who care to have it, even if they care only a little bit, only some of the time. To create more of something than people will pay for requires subsidy.

-Clay Shirky, Why We Need the New News Environment to be Chaotic

There are few contemporary thinkers that I respect more on matters of media and the Internet than Clay Shirky, but his comment about how much reporting “democracies demand” has bothered me since he wrote it nearly a year ago now. I think the point of view implied in the quoted section above misunderstands what reporting really is, as well as how democracies actually work.

To understand the former, it helps to step away from the hallowed ground of politics and policy and focus instead on reporting in those areas considered more déclassé. The more vulgar subjects of sports, technology, and video games should suffice.

Fanboy Tribalism

One of the most entertaining things about The Verge’s review of the Lumia 900 was not anything editor-in-chief Joshua Topolsky said in the review itself. No, what I enjoyed most was the tidal wave of wrath that descended upon him from the Windows Phone fanboys, who it seemed could not be satisfied by anything less than a proclamation that the phone had a dispensation from God himself to become the greatest device of our time. The post itself has over 2,400 comments at the moment I’m writing this, and for weeks after it went up any small update about Windows Phone on The Verge drew the ire of this contingent.

The fanboy phenomenon is well known among tech journalists, many of whom have been accused of fanboyism themselves. It’s a frequent complaint among the Vergecast’s crew that when they give a negative review to an Android phone, they are called Apple fanboys; when they give a negative review to a Windows Phone device, they are called Android fanboys; and so on.

To the diehard brand loyalist, the only way that other people could fail to see their preferred brand exactly the same way that they see it is if those other people have had their judgment compromised by their loyalty to some other brand. So Joshua Topolsky’s failure to understand the glory that is the Lumia 900 stems from the fact that he uses a Galaxy Nexus, an Android device, and his Android fanboyism makes it impossible for him to accurately judge non-Android things.

There came a certain moment when I realized that fanboy tribalism was a symptom of something innate in human nature, and that you saw it in every subject that had news and reporting of some sort. It may have become cliché to compare partisan loyalty with loyalty to a sports team, but the analogy is a valid one. Just as there are brand fanboys, there are sports team fanboys and political party fanboys.

Back in middle school, I got really wrapped up in this–as a Nintendo fanboy. I had a friend who was a really big PlayStation fanboy, and we had the most intense arguments over it. I don’t think I’ve ever had arguments that got as ferocious as those since–not about politics, not about morality, not about anything. We would each bring up the facts that we thought should have made it obvious which console was superior, and then get infuriated when the other side didn’t immediately concede defeat. I personally always came prepared with the latest talking points from Nintendo’s very own Pravda, Nintendo Power magazine.

Cognitive Biases and Group Dynamics

Cognitive science has a lot to say about why people act this way. A lot of progress has been made in cataloging the various biases that skew how human beings see the world. Acknowledging that people have a confirmation bias has become quite trendy in certain circles, though it hasn’t really improved the level of discourse. My favorite trope in punditry these days is when one writer explains that a different writer, or a politician they disagree with, can’t see the obvious truth because of their confirmation bias–ignoring the fact that the accuser has the very same bias, as all humans do.

Most of the discussion around cognitive biases centers on how they lead us astray from a more accurate understanding of the world. The more interesting conversation focuses on what these biases emerged to accomplish in the first place, in the evolutionary history of man. The advantages of cementing group formation in hunter-gatherer societies are something that moral psychologist Jonathan Haidt has explored in his recent book The Righteous Mind. Arnold Kling has an excellent essay in which he applies Haidt’s insights to political discourse.

The fact is that even in our modern, cosmopolitan world, we human beings remain a tribal species. Only instead of our tribes being the groups we were born among and cooperate with in order to survive, we have the tribe of Nintendo, the tribe of Apple, and the tribe of Republicans.

When the Apple faithful read technology news, they aren’t looking for information, not really. They’re getting a kind of entertainment, similar to the kind that a Yankee fan gets when reading baseball news. Neither has any decision that they are trying to inform.

Political news is exactly the same. When a registered Democrat reads The Nation, we like to think that something more sophisticated is going on than with our Apple or Yankee fan. But there is not. All of them might as well be my 13-year-old self, reading the latest copy of Nintendo Power. The Democrat was already going to vote for the Democratic candidate; it doesn’t matter what outrageous thing The Nation claims Republicans have been doing lately.

Information as Rhetoric

I think that the fear that there might not be enough truth-seekers out there fighting to get voters the salient facts about the rich and powerful is misplaced, for a few reasons. For one thing, in this day and age, it is much easier to make information public than it is to keep it secret. For another, it is rarely truth-seekers who leak such information–it is people who have an ax to grind.

The person who leaked the emails from the Climate Research Unit at the University of East Anglia wasn’t some heroic investigative journalist with an idealistic notion of transparency. They were undoubtedly someone who didn’t believe in anthropogenic global warming and wanted to dig up something to discredit those who did. A skeptic fanboy, if you like, out to discredit the climate fanboys.

The people that get information of this sort out to the public are almost always pursuing their own agendas, and attempting to block someone else’s. It’s never about truth-seeking. That doesn’t invalidate what they do, but it does shed a rather different light on getting as much information as “democracies demand”. Democracies don’t demand anything–people have demands, and their demands are often to make the people they disagree with look like idiots and keep them from having any power to act on their beliefs.

To satisfy either their own demands or that of an audience, some people will pursue information to use as a tool of rhetoric.

How Democracies Behave

Let us think of this mathematically for a moment. If information is the input, and democracy is the function, then what is the output?

I’m not going to pretend to have a real answer to that. There’s an entire field, public choice, with scholars dedicating a lot of research and thought to understanding how democracies and public institutions in general behave and why. My father has spent a lot of time thinking about what impact information in particular has on political and social outcomes. I am no expert on any of these subjects, and will not pretend to be.

I will tentatively suggest, however, that people do not vote based on some objective truth about what benefits and harms us as a society. I think people vote based on their interests. That is, their narrow material interest–such as whether a particular law is likely to put them out of work or funnel more money their way. But also their ideological or tribal interest–whether it advances a cause they believe in, or a group they consider themselves a part of.

So I don’t really see a reason to insist on subsidizing journalism. All that will accomplish is bending those outlets towards the interests of the ones doing the subsidizing.