Unleash the Practitioners

Richard Dawkins is famously optimistic about human knowledge, especially within the confines of science. He is–understandably–allergic to the brand of postmodernist who believes that reality is simply a matter of interpretation, or cultural narrative. He has a much-repeated one-liner that comes off as quite devastating: “There are no postmodernists at 30,000 feet.”

It’s quite convincing. Engineers were able to make airplanes because of knowledge that was hard-won by the scientific community. The latter developed and tested theories, which the former could then put to use in order to get us moving about in the air at 30,000 feet. Right?


Historian Philip Scranton has done extensive work demonstrating that the original developers of the jet engine had no idea of the theory behind it, which was only developed after the fact. The jet engine was arrived at through tinkering and rote trial and error.

Dawkins was correct that there is a hard reality that is undeniable, and led to many failed prototypes. But the background story of science that he subscribes to is simply incorrect in this instance. Scientists didn’t develop theory which practitioners could apply; the practitioners invented something that scientists then felt the need to explain.

What’s amazing is how often this turns out to be the case, once you start digging.

Practitioners Elevated Us to New Heights

If there is one book that should be mandatory reading for every student of history, it is Deirdre McCloskey’s Bourgeois Dignity. It lays out in stark fashion just how little we know about what caused the enormous explosion in our standard of living that started over two hundred years ago. She systematically works through every attempted explanation and effectively eviscerates them. Issues of the day seem small when put in the perspective of a sixteen-fold growth in our standard of living (conservatively measured), and the utter inability of theorists to explain this phenomenon is humbling.

For our purposes here we focus on Chapter 38: “The Cause Was Not Science”.

We must be careful when throwing around a word like science, as it means many things to many people. What McCloskey is referring to is the stuff that generally gets grouped into the Scientific Revolution: the high theory traded by the Republic of Letters.

The jet engine example I mentioned earlier is exactly the sort of thing McCloskey has in mind. Take another example, from the book:

“Cheap steel,” for example, is not a scientific case in point. True, as Mokyr points out, it was only fully realized that steel is intermediate between cast and wrought iron in its carbon content early in the nineteenth century, since (after all) the very idea of an “element” such as carbon was ill-formed until then. Mokyr claims that without such scientific knowledge, “the advances in steelmaking are hard to imagine.” I think not. Tunzelmann notes that even in the late nineteenth century “breakthroughs such as that by Bessemer in steel were published in scientific journals but were largely the result of practical tinkering.” My own early work on the iron and steel industry came to the same conclusion. Such an apparently straightforward matter as the chemistry of the blast furnace was not entirely understood until well into the twentieth century, and yet the costs of iron and steel had fallen and fallen for a century and a half.

This story plays out over and over again–the hard work of material progress is done by practitioners, but everyone assumes that credit belongs to the theorists.

It turns out that it isn’t even safe to make assumptions about those industries where theory seems, from the outside, to really dominate practice. What could be more driven by economic and financial theory than options trading? Surely this must be a case more in line with traditional understandings of the relationship between theory and practice.

And yet Nassim Taleb and Espen Gaarden Haug have documented how options traders do not use the output of theorists at all, but instead have a set of practices developed over time through trial and error.

Back to McCloskey:

The economic heft of the late-nineteenth-century innovations that did not depend at all on science (such as cheap steel) was great: mass-produced concrete, for example, then reinforced concrete (combined with that cheap steel); air brakes on trains, making mile-long trains possible (though the science-dependent telegraph was useful to keep them from running into each other); the improvements in engines to pull the trains; the military organization to maintain schedules (again so that the trains would not run into each other: it was a capital-saving organizational innovation, making doubletracking unnecessary); elevators to make possible the tall reinforced concrete buildings (although again science-based electric motors were better than having a steam engine in every building; but the “science” in electric motors was hardly more than noting the connection in 1820 between electricity and magnetism–one didn’t require Maxwell’s equations to make a dynamo); better “tin” cans (more electricity); asset markets in which risk could be assumed and shed; faster rolling mills; the linotype machine; cheap paper; and on and on and on. Mokyr agrees: “It seems likely that in the past 150 years the majority of important inventions, from steel converters to cancer chemotherapy, from food canning to aspartame, have been used long before people understood why they worked…. The proportion of such inventions is declining, but it remains high today.”

In 1900 the parts of the economy that used science to improve products and processes–electrical and chemical engineering, chiefly, and even these sometimes using science pretty crudely–were quite small, reckoned in value of output or numbers of employees. And yet in the technologically feverish U.K. in the eight decades (plus a year) from 1820 to 1900, real income per head grew by a factor of 2.63, and in the next eight “scientific” decades only a little faster, by a factor of 2.88. The result was a rise from 1820 to 1980 of a factor of (2.63) • (2.88) = 7.57. That is to say–since 2.63 is quite close to 2.88–nearly half of the world-making change down to 1980 was achieved before 1900, in effect before science. This is not to deny science its economic heft after science: the per capita factor of growth in the U.K. during the merely twenty years 1980 to 1999 was fully 1.53, which would correspond to an eighty-year factor of an astounding 5.5. The results are similar for the United States, though as one might expect at a still more feverish pace: a factor of 3.25 in per capita real income from 1820 to 1900, 4.54 from 1900 to 1980, and about the same frenzy of invention and innovation and clever business plans as Britain after 1980.
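McCloskey’s arithmetic is easy to recompute. A quick sketch, using only the growth factors she quotes (the 1.53 factor is compounded over four twenty-year periods to get her eighty-year equivalent):

```python
# UK per capita real income growth factors, as quoted by McCloskey.
early = 2.63   # 1820-1900, "in effect before science"
late = 2.88    # 1900-1980, the "scientific" decades
total = early * late
print(round(total, 2))  # → 7.57, her combined 1820-1980 factor

# 1980-1999 grew by a factor of 1.53; four such twenty-year periods
# make an eighty-year span, hence her "astounding" comparison figure:
eighty_year = 1.53 ** 4
print(round(eighty_year, 1))  # → 5.5
```

Since 2.63 and 2.88 are nearly equal, the two eighty-year spans each account for roughly half of the 7.57-fold rise, which is the whole point of her comparison.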

Note that McCloskey is not saying that science hasn’t made any contribution at all, or that the contribution is small. Taleb does not make that claim either. What is at issue here is that the contribution of science to our material well-being is not just overblown, but overblown by several orders of magnitude. McCloskey ultimately concludes that “We would be enormously richer now than in 1700 even without science.”

Yet They Are Everywhere in Chains

Alex Tabarrok thinks the road to the innovation renaissance runs through focusing education funding on STEM majors and tailoring our patent system so it only provides protection for industries like pharmaceuticals, where patents appear to make the biggest positive difference. Even Michele Boldrin and David Levine, who otherwise believe in abolishing intellectual property entirely, agree with Tabarrok’s exception. And Tyler Cowen believes that part of what we need to do in order to climb out of the Great Stagnation is elevate the status of science and scientists.

With respect to these distinguished gentlemen, I disagree. The road to greater prosperity lies in breaking the shackles we have increasingly put around practitioners, and elevating their work, and their status.

Whether or not the specific skills implied by a STEM career contribute to progress, it is quite clear that what is taught in the classroom is unlikely to be what is practiced in the field–since the teaching is done by teachers, who are not as a general rule practitioners. And let us return to Scranton, McCloskey, and Taleb: the vast majority of our material wealth came from tinkering that is decidedly non-STEM.

If you want to make progress in pharmaceuticals, don’t do it by enforcing (or worse, expanding) patents, which inhibit trial and error by those who do not hold the patent. Instead, remove the enormous impediments we have put up to experimentation. The FDA approval process imposes gigantic costs on drug development, including the cost of delaying when a drug comes to market and greatly reducing the number of drugs that can be developed. There is an entire agency whose sole purpose is to regulate medical trials.

It is all futile–as I have said before, in the end, the general market becomes the guinea pigs for many years after the drug is available, and no conceivable approval process can change that fact. But if you think differently–if you think theorists can identify what treatments are likely to succeed ahead of time, and are capable of designing experiments that will detect any serious side-effects–then our current setup makes a lot of sense.

But that is not the reality. Nassim Taleb argued in his latest book that we should avoid treating people who are mostly healthy, because of the possibility of unknown complications. On the other hand, we should take way more risks with people who are dangerously ill than our current system allows.

The trend is going the other way. Because we have made developing drugs so expensive, it is much more profitable to try to come up with the next Advil–a drug that eases the symptoms of a mild disease but is purchased by a very wide market–than a cure for rarer but more deadly diseases. In one sense it doesn’t matter what drug companies set out to do, because the ultimate use of a drug is discovered through practice, not through theory. But it does matter in another sense: we are currently wasting many rounds of trial and error, and putting people at risk, to attempt to make small gains.

Thalidomide remains the iconic example of how this works. It was marketed as an anti-nausea drug but caused birth defects when pregnant women took it. Yet it is widely used today, for treating far more serious problems than nausea.

You Cannot Banish Risk

Aside from overestimating the abilities of theorists, the reason the discovery process of practitioners has been so hamstrung is because people are afraid of the errors inevitable in a process of trial and error. Thalidomide babies were an error, a horrible one. But there is no process, no theory that will allow us to avoid unforeseen mistakes. The only path to the drug that cures cancer or AIDS or malaria is one that involves people being hurt by unforeseen consequences. As Neal Stephenson put it, some people have to take a lot of risks in order to reduce the long run risk for all of us.

And along with the unforeseen harms, there are unforeseen gains as well. Penicillin, arguably the single greatest advancement in medicine in the 20th century, was an entirely serendipitous discovery.

I do not know if the stories of a great stagnation are accurate, but I agree with Peter Thiel that our regulatory hostility towards risk taking impoverishes us all, and allows many avoidable deaths every year.

The only way to start pushing the technological frontier again like we did at the peak of the Industrial Revolution is to empower the practitioners rather than impair them.

Unleash the practitioners and progress will follow.

When to Medicate

“We should not take risks with near-healthy people; but we should take a lot, a lot more risks with those deemed in danger.”

-Nassim Taleb, Antifragile

In an uncertain world, Taleb wants us to stop thinking we know the probabilities and instead think more seriously about payoffs.

Let’s say a new pill comes to market that claims to be able to cure the common cold, quickly and with minimal side-effects. What is the potential payoff from taking this pill? At best, you will end your cold more quickly than you otherwise would have. And at worst?

You may be tempted to say that the downside risk is not very large, as the pill had to go through a testing process run by the company that developed it and examined by the FDA. The process can take years–surely any problems would have been detected by its completion, right?

Uncertainty and Complexity

Wrong–any test is always going to have limits, by necessity. It might involve only one, two, or three thousand test subjects–whose selection is not truly random. Even if we could treat the statistical results with complete confidence, any effect that only occurs in a tiny fraction of this sample would impact a large number of people once it hits a market of millions. And any effect that doesn’t really visibly show up until a time period longer than the approval process will be missed entirely.
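To make those limits concrete, here is a back-of-the-envelope sketch. The incidence, trial size, and market size are illustrative numbers of my own choosing, not figures from any actual trial:

```python
# Hypothetical numbers: a side effect striking 1 in 10,000 patients,
# a trial with 3,000 subjects, and a market of 10 million users.
p = 1e-4
n_trial = 3_000
market = 10_000_000

# Chance the trial sees even a single case of the side effect:
p_detect = 1 - (1 - p) ** n_trial
print(f"{p_detect:.0%}")  # → 26%, roughly a one-in-four chance

# Expected number of people affected once the drug hits the market:
print(round(market * p))  # → 1000
```

So a trial of quite respectable size would most likely miss this side effect entirely, while the market would still produce a thousand victims.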

The bottom line is that the general patient population ends up being guinea pigs sooner or later, and there is no avoiding it. It’s for this reason that Robin Hanson always advises his students to avoid the “cutting edge” medical treatments in favor of those that have been tested by time. Treatments that have been around for 50 or 100 years are much less likely to have undetected risks than treatments that are 20, 10, or 5 years old–or worst of all, brand new.

Every new treatment has a large, unknown downside risk of undetected side-effects. Moreover, every new treatment has a similarly large, unknown downside risk of interaction with other treatments already on the market. Even if the testing process turns out to have revealed every possible side-effect, it is literally impossible for it to have detected every possible interaction–consider that some interactions will end up being with treatments that didn’t exist at the time of testing!

What is there to Gain?

Taleb’s point isn’t sophistry. Consider the most famous case of undetected harm in the 20th century–Thalidomide. I had known that after Thalidomide made it to market, it caused a rash of birth defects. What I hadn’t realized was that it was being used to treat morning sickness.

So in the best-case scenario, the women taking Thalidomide would have had their nausea pass more quickly and been otherwise unchanged. But the worst-case scenario was clearly unknown, as history proved. The question you have to ask yourself when you’re receiving some treatment today is whether what you’re being treated for is worth the risk of unwittingly stumbling upon the next Thalidomide.

If it’s something that our body is capable of dealing with on its own, Taleb’s advice is to forego treatment entirely. When the potential payoff is so small, errors on the part of the medical establishment will only hurt us.

This doesn’t mean that we should become anti-medicine. Instead, we should focus on extreme cases, and be willing to take more risks in those cases than our current regulatory and cultural environment allows. Taleb:

And there is a simple statistical reason that explains why we have not been able to find drugs that make us feel unconditionally better when we are well (or unconditionally stronger, etc.): nature would have been likely to find this magic pill by itself. But consider that illness is rare, and the more ill the person the less likely nature would have found the solution by itself, in an accelerating way. A condition that is, say, three units of deviation away from the norm is more than three hundred times rarer than normal; an illness that is five units of deviation from the norm is more than a million times rarer!
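Taleb’s rarity figures are conservative lower bounds; a quick check against the one-sided tail of a normal distribution, using only the standard library, bears them out:

```python
from math import erfc, sqrt

def tail_odds(z):
    """One-in-how-many odds of exceeding z standard deviations
    under a normal curve (one-sided tail)."""
    p = 0.5 * erfc(z / sqrt(2))
    return 1 / p

print(round(tail_odds(3)))  # → 741, "more than three hundred times rarer"
print(tail_odds(5))         # about 3.5 million, "more than a million times rarer"
```

Whether human traits are actually normally distributed is its own question, but the shape of the argument holds: the further a condition sits from the norm, the less evolutionary pressure there was for nature to handle it.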

If we focus on those cases that were not likely to have emerged in a significant way during the process of natural selection that brought us to where we are today, we minimize the amount of downside risk from unforeseen side-effects that we’re exposing ourselves to, and we’re maximizing the potential gains of treatment.

Thus, the answer is not to increase regulation of the pharmaceutical industry or expand the FDA approval process. The latter is already so long that it allows lives to be lost while life-saving drugs take forever to come to market.

The Impulse to Intervene

The answer isn’t to just take what your doctor tells you at face value, either.

If 9 times out of 10, or 9.99 times out of 10, your doctor should tell you not to be treated in any manner, that is unfortunately not likely to be what you hear when you arrive for your appointment.

Doctors are simply more likely to want to do something rather than nothing. Consider the following, again from Taleb:

Consider this need to “do something” through an illustrative example. In the 1930s, 389 children were presented to New York City doctors; 174 of them were recommended tonsillectomies. The remaining 215 children were again presented to doctors, and 99 were said to need the surgery. When the remaining 116 children were presented to yet a third set of doctors, 52 were recommended the surgery. Note that there is morbidity in 2 to 4 percent of the cases (today, not then, as the risks of surgery were very bad at the time) and that a death occurs in about every 15,000 such operations and you get an idea about the break-even point between medical gains and detriment.

In other words, doctors were likely to advise a similar proportion of whatever group they were presented with to get the surgery–despite the fact that other doctors had already lumped them into the group that didn’t need treatment!
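The proportions behind that observation are easy to check from the numbers Taleb reports:

```python
# (recommended for surgery, children presented) in each of the three rounds
rounds = [(174, 389), (99, 215), (52, 116)]
for recommended, presented in rounds:
    print(f"{recommended}/{presented} = {recommended / presented:.2f}")
# Each round recommends surgery for roughly 45 percent of whoever shows up,
# regardless of how many times the group has already been screened out.
```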

Moreover, this problem is not confined to doctors in the 1930s. Consider how doctors and hospitals have responded to the scientific consensus that mammograms do not save lives on net.

For years now, doctors like myself have known that screening mammography doesn’t save lives, or else saves so few that the harms far outweigh the benefits. Neither I nor my colleagues have a crystal ball, and we are not smarter than others who have looked at this issue. We simply read the results of the many mammography trials that have been conducted over the years. But the trial results were unpopular and did not fit with a broadly accepted ideology—early detection—which has, ironically, failed (ovarian, prostate cancer) as often as it has succeeded (cervical cancer, perhaps colon cancer).

More bluntly, the trial results threatened a mammogram economy, a marketplace sustained by invasive therapies to vanquish microscopic clumps of questionable threat, and by an endless parade of procedures and pictures to investigate the falsely positive results that more than half of women endure. And inexplicably, since the publication of these trial results challenging the value of screening mammograms, hundreds of millions of public dollars have been dedicated to ensuring mammogram access, and the test has become a war cry for cancer advocacy. Why? Because experience deludes: radiologists diagnose, surgeons cut, pathologists examine, oncologists treat, and women survive.

In short, it is uncertain how deadly the cancers that mammograms detect early are, but it is certain that the invasive tactics required to combat such cancers put the patient at risk. The study that the above article begins with describes how the rise in mammograms has not resulted in a drop in the late-stage, definitely dangerous form of breast cancer.

There are any number of possible stories you can tell about why doctors will opt to do something rather than nothing, even when every intervention–needless or needed–carries the risk of iatrogenesis.

A Robin Hanson-style story (PDF) would go as follows: doctors are simply meeting a market demand. People are not really looking for what is medically best for them when they make an appointment, any more than consumers of news are trying to become more informed. What patients want is comfort–the comfort of someone who knows what they’re doing, taking charge of the decisions regarding our health. And few people take comfort in being told to do nothing–even if it’s the wisest choice. So the market produces doctors that satisfy the demand for comfort, rather than the demand for the best possible health outcomes.

The story subscribed to by Taleb and by the doctor quoted above is even more straightforward–more money is spent on intervention than on non-intervention, so the incentives are clear. I’m not so sure about this one, as the doctors performing the diagnosis aren’t usually the ones who get paid for the procedure.

But the story doesn’t matter. The phenomenon of intervening too often is well documented, whatever the reason it occurs.

If what you’re interested in is your health, rather than comforting answers from a credentialed expert, then Taleb’s argument is worth considering. Do you really need to receive treatment for a bug that you’ll work through eventually, or for baldness, or for nausea that was always going to be temporary?

Why risk losing everything when you have so little to gain?

Another way to view it: the iatrogenics is in the patient, not in the treatment. If the patient is close to death, all speculative treatments should be encouraged— no holds barred. Conversely, if the patient is near healthy, then Mother Nature should be the doctor.

New Beginnings

I had not intended to make a regular thing of summarizing my year. I wrote the post on 2011 because that was a particularly bad year, and I had things I wanted to work through the only way I know how–by writing about them. This year, however, turned out to be exceptional in precisely the opposite sense from last year–a lot of wonderful things happened in my life and in the lives of those important to me.

Of course, 2012 was always going to be an exceptional year for me in one important way–my wedding was set for August 25th, meaning that the whole process would characterize two-thirds of the year. That would have been enough to make it an exceptional year. But a lot of other stuff happened, and once again I cannot curb my compulsion to write about it.

Losing a Friend

2012 did not start well, as Tiger, our old family dog, passed away.

Can this be compared to the losses my family experienced the year before? Of course not.

But Tiger was almost 12 years old. He is the only dog for which it could conceivably be said that we grew up together. You can certainly say it of my sister and brother, who are much younger than I and were quite small when he was a puppy.

It’s hard to describe how I felt about losing him. You get this little creature when he is only a few weeks old, and you watch him grow up. Then you watch him grow old. And you know, you always know, that he will be gone long before you have come anywhere near growing old yourself. Why do we put ourselves through this, again and again?

But of course, I will put myself through it again. There isn’t any reason I could articulate for what makes it worth it. But I could never say that getting Tiger was the wrong choice.

Tiger with ball

Getting Married

But that was in early January. Though I wanted to mention him, my brain has arbitrarily lumped his passing into 2011.

What 2012 was really about was getting married.

In a way, it was appropriate that our wedding fell on an election year. Turns out that planning a wedding is a bit of a political process.

I think I was more prepared than most, as my dad has been warning me about what wedding planning is like for basically as long as I’ve been alive. Still, laughing at old stories about the planning of my parents’ wedding is rather different from living the reality of planning my own.

Choices and responses that seem perfectly normal and acceptable under any other circumstances are suddenly imbued with significance. Decisions made for purely practical reasons come off as signals about how important you consider someone to be in your life. Tensions run high. Throw in all the logistical fun of planning any big event, and you’ll get the general picture–for those of you who aren’t already all too familiar with this process.

In the end, it was the happiest day of my life so far.

dancing at wedding

We lucked out in so many dimensions.

The weather was perfect–it was the best week of weather in the Boston area of the whole summer. There were no last minute logistical disasters. Our justice of the peace, bless her, was fantastic and entertaining. My friend and groomsman Alex brewed a special beer for the occasion–which was tremendously popular–and created a label using a picture of us taken by his wife Marley. Our dads knocked everyone’s socks off with their fantastic speeches. Her maid of honor gave an equally eloquent speech, and my best man embarrassed the hell out of me, as was his duty.

For all the stress of getting there, weddings, when they go well, are truly wonderful. So many people, from so many different times and parts of our lives, all come together. I have always known that I was exceptionally lucky for the family that I was born into, but I have been equally lucky in my friends. And I am unbelievably lucky in who I was standing next to, in front of all those friends and family, on that very day.

It was over in the blink of an eye, and then I was a husband, and I had a wife.

Still processing that one.

One of my favorite things about the wedding was seeing people from different parts of my life meeting one another. Also, seeing Internet friends meet each other–one of whom I met in person for the first time that day!



The next day, Catherine and I flew off to Paris to enjoy our honeymoon for a week and a half.

A New Job and New York

The time relaxing in Paris was just what the doctor ordered, because as soon as we returned to DC we had to shift gears and find an apartment in New York to move into by the end of the month.

Way back on March 31st, Catherine and I had taken a bus up to New York to stay with some friends for the weekend. A strangely large number of our friends had moved there recently, so we thought it high time to go up and spend some time with them. We stayed with Peter and Jordan, two good friends of ours (Peter was my best man). We didn’t do any tourist things, really–we lived life like our New York friends did for a couple of days.

It was a lot of fun. Neither of us had had any interest in living in New York before, but on the bus ride home, we discovered that we had each been swayed by seeing the residential side of it. We had no reason to move to New York, but for the first time we had come away feeling like we could see ourselves living there.

That very week I received an email from Eric Litman asking if I’d be interested in talking about a position at Medialets, a company based in New York.

The details of how this came about are very strange, and if you’re curious you can read about it here. Suffice to say that a blog post written four years prior and connecting to Eric on LinkedIn more recently were partly responsible for the opportunity.

It turned out to be a tremendous opportunity, in many dimensions. At the outset of the year, I thought I wanted to move in a more academic direction–hence my unfulfilled goal of trying to get a paper published. But Medialets oozes ambition, in a way that no company I’ve ever worked for has. It is in the middle of a rapidly changing industry–I doubt that by December of 2013 the company or the industry will look much like they do today. And Eric offered me a chance to be a part of that.

It has been very exciting, and between the nature of the job and the added move to a new city, it added an extra level of chaos to a year already beset by wedding planning.

The folks at Medialets were kind enough not to make me move until after my wedding. The first three months would be spent mostly working from home, and then coming up by train every other week for 2-3 days at a time.

This was mostly exciting at first, especially when they brought me up for my whole first week and also paid for Catherine to come, too. However, after a while, that kind of regular travel gets to be a bit much.

Throw in the fact that we needed to go to Massachusetts at least once this year to settle some wedding planning matters in person, and all that travel got to be downright tiring.

By the time we moved to New York, I was ready for the transition. As ready as I was going to be, anyway. With the help of a broker, I ended up seeing over 20 apartments, half of which I saw during my last trip up in September. We ended up on the Upper West Side, and have been pretty happy with the place.

It’s only been three months, but we already feel strangely comfortable in Manhattan. As someone who has had to be driven or drive everywhere for pretty much my whole life, it felt weird to sell off my car before we moved here. But it’s just so ridiculously easy to get around here without one (not to mention that I couldn’t exactly afford to keep one in some Manhattan garage!).

It’s too soon for me to write up all of my feelings about living in New York. Perhaps I will save it for another post, after I’ve lived here longer. But aside from two years in which my dad worked at the embassy in Paraguay, I have never lived anywhere but the DC area. And I have never lived more than a 30 minute drive from my immediate family. This was a big change, right on the heels of getting married.

We also ended up being in New York at a rather peculiar time.

Sandy: A Weeklong Interlude

We had been in New York for almost exactly a month. We had spent time with our friends who lived here. My parents had visited. The weekend before the storm, some friends from Virginia had visited.

We had just begun to settle into our new routines when Sandy came along and put half the city in the dark, to say nothing of the flooding.

We were very lucky. We’re way up in the 90s on the Upper West; the land is quite elevated and we never lost power. The businesses in our area never really closed, save perhaps during the height of the storm (though it isn’t like we were going out to check at that point).

Peter and Jordan live down in the 30s on the east side; they lost power and running water. So they crashed with us for a few days.

That week was very strange. Medialets’ office was without power as well, though for me this was a moot point since I couldn’t really get to it without the subway. Since we had power, and since the nature of my work allows me to work from home easily, I thought I’d be pretty productive.

It didn’t quite work out that way. It wasn’t just the office; half of my coworkers didn’t have access to power or an Internet connection. Still, a lot of my job involves contact with people at other companies. I thought there was a good chance I’d be able to make progress with them.

Those people are all at ad agencies, ad tech companies, or media companies, though. Guess where nearly all of them were located? Lower Manhattan.

So I stayed online during the day and did what I could, working off of a system that was, at the time, diesel-fueled. Catherine continued to work from home as she normally would, for she was working remotely for the company in Virginia that she had been working for when we lived in DC. Jordan did what she could as well, and Peter caught up on his law school readings.

Your Huddled Masses

I really enjoyed having Peter and Jordan over, though I suspect that everyone but me grew a little tired of the close confinement. It turns out that my brain was wired for some kind of wolf pack mentality, where I’m happiest when I have my friends all in one space where we’re together all the time. So I had a great time.

It bears repeating just how lucky we were. Many people lost their homes to Sandy, some tragically lost even their lives. We lost access to the subway for a few days and gained some company, and then everything basically went back to normal.

Still, it was a bit crazy to have this happen a mere month after moving here!

Another New Job

And since we hadn’t packed enough change into one year, Catherine went and got herself a new job as well!

Having the choice to work remotely was great when we were moving. If looking for an apartment at the eleventh hour was stressful, having to find the means to pay for it would have been more than doubly so!

But for a number of reasons it made sense to look for a job here. And the turnaround was amazingly fast–they reached out to her on a Monday and had made her an offer by that Friday.

Suffice to say that January of 2013 will begin with both of us working different jobs, in a different city, and with a different marital status than we had in January of 2012.


Though not as exciting as everything else that was going on, this was a particularly fun year of blogging.

The most exciting thing to happen was that Stories of Progress and Stagnation was linked to by a few big econ blogs–including Marginal Revolution and the Economist’s Free Exchange blog–leading to a few thousand views. A big deal for a blog that gets a couple dozen views on a good day!

I wrote the piece mostly because I had a lot of stories that seemed persuasive to me, only they basically contradicted each other. It was a simple case of thinking through writing. I certainly wasn’t trying to make an argument.

Interestingly, most commenters seemed to think that I was arguing for one or the other of the stories. Even Robin Hanson clearly believed this:

You write well Adam, but in the end I’m not persuaded. You push me toward accepting Cowen et al’s position, at least for rich nations for now.

Implying that I was arguing against the stagnationist point of view. Several other commenters, on the other hand, seemed contemptuous of my apparently hard-line stagnationist position!

I’m still not sure whether this division is a sign of the failure, or proof of the success, of my writing in this instance. But in any case I enjoyed my fifteen minutes of elevated attention.

Though nearly all of my posts this year were basically intellectual storytelling, the one that was by far the most fun to write was My Love Letter to Video Games. Over the years it has increasingly become clear that gaming is not just something you do in your home; it’s part of who you are, and it ties you to other people no less than loyalty to a sports team. In any case I used this as an excuse to take a trip down memory lane.


2012 was a year I’ll never forget, but I’m hoping the next one will be much less eventful. I’m ready for a little more peace, and a little less excitement.

Of course, I’ve also ended up in an industry that is changing at an insane pace. So, in my professional life, “peaceful” is probably not the adjective of choice.

But I’ll settle for being in the same city, with the same jobs, by this time next year!

Using Programming to Learn Math, Using Math to Learn Programming

When I was in High School, the tool of choice for any math class was the TI-83 graphing calculator. I believe this is still the case.

One cool thing about TI-83s is that you could create custom programs for them, using a language called TI-BASIC. Online, you could find all kinds of programs people had made for the things. Games were of course the most popular downloads; simple things like Snake or a port of Oregon Trail.

For more practical purposes, there were programs that could, for example, solve any quadratic equation. The teachers did not approve of these. I wasn’t one to cheat, so I thought–is it really cheating if I make the programs myself, from scratch?

It was a great exercise. I often had to make multiple programs per unit, since the standard fare of math classes is to have problem sets with different givens and different unknowns (it might be the Y-intercept or it might be the slope that is unknown in the slope-intercept equation, or it might simply be Y). This forced me to really get to know the different equations inside and out, and it also tested the limits of what I could do in TI-BASIC.
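The TI-BASIC originals are long gone, but the idea is easy to sketch in Python (the function and its argument names here are illustrative, not my actual programs): a single program that can solve the slope-intercept equation for whichever quantity happens to be the unknown.

```python
def solve_slope_intercept(y=None, m=None, x=None, b=None):
    """Solve y = m*x + b for whichever value is left as None.

    Pass exactly three of the four values; the fourth is returned.
    """
    if y is None:
        return m * x + b          # solve for Y directly
    if b is None:
        return y - m * x          # Y-intercept unknown
    if m is None:
        return (y - b) / x        # slope unknown
    if x is None:
        return (y - b) / m        # X unknown
    raise ValueError("Leave exactly one value as None.")

# Slope unknown: given y = 7, x = 2, b = 1, the slope is 3
print(solve_slope_intercept(y=7, x=2, b=1))
```

Writing one program that handles every permutation of givens and unknowns is exactly what forced me to understand the equation itself, rather than just a single rearrangement of it.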

But the teachers still did not approve of this. They took every possible opportunity to ask us to wipe the memory on our calculators. They discouraged using programs at all. They were so wedded to the traditional way of teaching and learning math that they saw this incursion of technology as nothing but a distraction, and a tool for cheating.

I can’t speak for every math teacher in the country, but I can say that mine were subscribing to a backwards way of thinking. Do you know how many of the kids in my Freshman geometry class still remember what they learned there? And of those, how many actually ended up with a practical application of that knowledge? I promise you that the number is very close to zero.

But what a wasted opportunity to encourage kids to get into programming!

A Co-Learning Program

I am not suggesting that we give up on teaching math, but I do think that how we teach it needs to be seriously rethought.

What if, at the same time that elementary school kids were being taught basic addition, subtraction, and multiplication tables, they were also learning the basic syntax of one programming language, like Python?

As students progressed in their learning of math concepts, it would become increasingly integrated with the programming classes. Math problem sets would require students to create programs, on the spot, that could solve whole classes of such problems. The actual programming class would be dedicated to teaching bits of Python that would specifically be helpful for solving such problems, without spelling out exactly how to accomplish that end.
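For instance, a unit on quadratics might ask students to produce something like the following (a sketch in Python; the names are my own illustration): one function that covers the whole class of problems a·x² + b·x + c = 0, rather than a single worked example.

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of a*x^2 + b*x + c = 0, smallest first."""
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        return []  # no real roots
    root = math.sqrt(discriminant)
    # The quadratic formula, applied with both signs
    return sorted([(-b - root) / (2 * a), (-b + root) / (2 * a)])

# x^2 - 3x + 2 = 0 factors as (x - 1)(x - 2), so the roots are 1 and 2
print(solve_quadratic(1, -3, 2))
```

A student who can write this has necessarily internalized the quadratic formula, including the edge case of a negative discriminant, in a way that plugging numbers into a memorized formula never requires.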

In short, math would be used to demonstrate what programming was capable of accomplishing, and programming would be used to force students to think in depth about the nature of the math they were learning.

This concurrent, integrated learning program would begin in elementary school and follow students all the way to the end of High School, by which point they would know at least as much math as any other High School graduate, and have a mastery of at least one programming language that only a tiny minority of High School graduates (or anyone else) can really boast of today.

The Collision of the Personal and the Professional


Eight years ago, when I was a pretentious, know-it-all 19-year-old, the conversation about new media was dominated by the rhetoric of bloggers versus journalists, citizen versus mainstream media. I had seen the blogosphere call out Dan Rather for running with forged documents as evidence. I learned of the role bloggers played in making sure Trent Lott’s statements saw the light of day.

As far as I was concerned, newspapers and news outlets in general were old hat on their way to extinction, and blogs were the future.

What did I think this meant?

It meant that newspapers would unbundle. It meant that articles on the Iraq War, or science features written by journalists with little background in the subject matter, would be replaced by the blogs of people actually living in Iraq, and of actual scientists. This wasn’t all in my head–such blogs existed and have only grown more numerous.

My thoughts on whether anyone would make money on this new way of things, and how, went back and forth. But I thought the future looked more like Instapundit and Sandmonkey than like The New York Times and The Washington Post.

As I have witnessed the evolution of the web over the years, aged to a point beyond a number ending in -teen, and followed the conversation and research on new media, my point of view has changed–to say the least.

It’s not simply that it was wrong, but that it was far too narrow. It has not only become clear that professional media, in some form, is here to stay. It has also become clear that the old blog vs mainstream media perspective misses the big picture.

What has happened is that many activities that we conducted in our personal lives have moved online; they have become digital and they have become some approximation of public. This has big implications for other people’s professions–one tiny corner of which is the impact that personal blogs have had on professional media. But it also has an impact on our own professional lives.

In short, the personal and the professional are colliding on a number of fronts. How this collision will play out is an open question.


The vast majority of my conversations with nearly all of my friends and family occur in a digital format. It happens on Twitter, Facebook, and Tumblr. It happens in email, in text messages, and in Google Talk chat windows. A very large proportion of this is public or semi-public.

I also enjoy writing about subjects that I’m thinking about. For that reason, I’ve maintained a blog in one form or another since 2004. I have never made one red cent off of my blogging. It has always been something I’ve done out of enjoyment of the writing itself.

Before the Internet, my writing would undoubtedly have been relegated to the handful of friends I could strong-arm into looking at some copies I made for them. I certainly wouldn’t be able to ask this of them on a very regular basis, so most of my writing would have remained unread–or, discouraged, I would have written a lot less.

The thing I enjoyed about blogging from the beginning was that it provided me with a place to put my writing where people could find it, without me having to make the imposition of bringing it to them. However, translating this private analog activity into a public and digital one has implications beyond this simple convenience.

For one thing, it makes it possible for me to connect with new people who share my interests from anywhere in the world. It can also have implications for my professional life. If I write something insulting about my coworkers, or, say, something extremely racist, odds are it could get me fired and possibly have an impact on my long-term employability.

Conversely, just as I can discover and be discovered by new friends, I can also discover and be discovered by people who might provide me with a career opportunity–and indeed this happened to me earlier this year.

When enough enthusiasts move online in this manner, it begins to have consequences for the world of professional writing in general. One lone guy blogging about a few esoteric subjects isn’t going to have much of an impact. Over 180 million people writing about everything under the sun will have some serious implications. If we take Sturgeon’s Law at face value and say that you can throw 90 percent of that in the garbage, we’re still talking about tens of millions of people writing pieces of average to excellent quality.

This dramatic expansion in the supply of written work has understandably made professional producers of written words sweat more than a little. One way of looking at this is from the old blog vs mainstream media perspective. A better way is from the understanding that any professional content outlet is going to have to adapt to the new reality of personal production if it wants to survive.

That process of adaptation has been messy and is still ongoing.


What my 19-year-old self did not realize is that the media business has never really sold information. It has sold stories; it has sold something for groups to rally around and identify themselves with or against. There is still money to be made by selling this product. Clay Johnson has documented some methods that he finds vile, but there are plenty of perfectly respectable ways to do it as well.

Take The Verge–a technology site that launched last year. It does not suffer from the baggage of a legacy business–it was born online and lives online. It was created by a group of writers from Engadget, another professional outlet that was born on the web, who thought they could do better on their own. I have argued that their initial success was made possible in part by the fact that the individual writers had built up a community around them, through their podcast and through their personal Twitter accounts.

The Verge invests a lot in building its community. The content management tools it offers in its forums are, they claim, just as powerful as the tools they themselves use to write posts. They frequently highlight forum posts on their main page. Their writers engage with their readers there and on various social media.

Another way that the professional world has adapted is by treating the group of unpaid individuals producing in their space as a sort of gigantic farm system for talent and fame. This system is filled with simple enthusiasts, but also includes a lot of people consciously trying to make the leap to a career in what they’re currently doing for free. Either way, a tiny fraction of this group will become popular to varying extents. Rather than competing with this subset, many existing professional operations will simply snap these individuals up.

Take Nate Silver, the subject of much attention this election cycle. He started writing about politics in a Daily Kos diary, then launched his own blog on his own domain. Eventually, this was snapped up by The New York Times. The article on this is telling:

In a three-year licensing arrangement, the FiveThirtyEight blog will be folded into NYTimes.com. Mr. Silver, regularly called a statistical wizard for his political projections based on dissections of polling data, will retain all rights to the blog and will continue to run it himself.

In recent years, The Times and other newspapers have tapped into the original, sometimes opinionated voices on the Web by hiring bloggers and in some cases licensing their content. In a similar arrangement, The Times folded the blog Freakonomics into the opinion section of the site in 2007.

Forbes did this with Modeled Behavior; The Atlantic, and now The Daily Beast, did this with Andrew Sullivan’s Daily Dish. In publishing, Crown did this with Scott Sigler, and St. Martin’s Press did this with Amanda Hocking.

Suffice to say, these markets continue to be greatly disrupted. However, I do not think the adapted, matured versions of these markets will involve the utter extinction of professional institutions.


I consider my Twitter account to be extremely personal. No one is paying me to be there. With a handful of exceptions, I don’t have any professional relationships with the people I follow or am followed by there.

But there are definitely people who I feel have followed me because of some notion that it might help their career. Not because I’m some special guy who’s in the know, but because they think, say, that following everyone who seems to talk a lot about social media will somehow vaguely translate into success in a career in that industry. A lot of people who consider Twitter a place for human beings to talk to one another as private individuals have a low opinion of such people.

But I cannot deny that I have, on occasion, used Twitter to my professional advantage. And it’s not as though there’s a line in the sand for any of these services stating FOR PERSONAL USE ONLY. It’s difficult for journalists of any kind to treat anything they say in public as something that can be separated from their profession. I have seen some create distinct, explicitly labeled personal Twitter accounts, with protected tweets. Of course, Jeff Jarvis would point out that they are merely creating another kind of public by doing so.

Moreover, more and more services we use in our personal lives are having implications for our employers. How many of us have had an employer ask us to “like” the company page on Facebook? Or share a link to a company press release? These services are far too new for us to have expectations set about them. Is this overstepping the boundaries of what is acceptable, or is this a legitimate professional responsibility we have to our employers?

In a world where a personal project or an answer on Stack Overflow can be added to your resume when applying for a job, the line between personal and professional is not quite as sharp as it used to be.

Take Marginal Revolution as an example. Is it a personal or a professional blog? Certainly Tyler Cowen and Alex Tabarrok are not paid to write what they post. But they are using the blog as a venue for participating in the larger conversation of the economics profession. Of course, they also post on any number of specific subjects that catch their interest. It is both a platform to promote their books and a venue to solicit advice from their readers on which restaurants to check out when they are traveling.

Are categories like “personal” or “professional” even useful for describing things like Marginal Revolution? Is it an exceptional case, or–its particular level of popularity set aside–is it the new normal?