Unleash the Practitioners

Richard Dawkins is famously optimistic about human knowledge, especially within the confines of science. He is, understandably, allergic to the brand of postmodernist who believes that reality is simply a matter of interpretation, or cultural narrative. He has a much-repeated one-liner that comes off as quite devastating: “There are no postmodernists at 30,000 feet.”

It’s quite convincing. Engineers were able to make airplanes because of knowledge that was hard-won by the scientific community. The latter developed and tested theories, which the former could then put to use in order to get us moving about in the air at 30,000 feet. Right?

Wrong.

Historian Philip Scranton has done extensive work demonstrating that the original developers of the jet engine had no idea of the theory behind it, which was only developed after the fact. The jet engine was arrived at through tinkering and rote trial and error.

Dawkins was correct that there is a hard, undeniable reality, one that led to many failed prototypes. But the background story of science that he subscribes to is simply incorrect in this instance. Scientists didn’t develop theory that practitioners could apply; the practitioners invented something that scientists then felt the need to explain.

What’s amazing is how often this turns out to be the case, once you start digging.

Practitioners Elevated Us to New Heights

If there is one book that should be mandatory reading for every student of history, it is Deirdre McCloskey’s Bourgeois Dignity. It lays out in stark fashion just how little we know about what caused the enormous explosion in our standard of living that started over two hundred years ago. She systematically works through every attempted explanation and effectively eviscerates each one. Issues of the day seem small when put in the perspective of a sixteen-fold growth in our standard of living (conservatively measured), and the utter inability of theorists to explain this phenomenon is humbling.

For our purposes here we focus on Chapter 38: “The Cause Was Not Science”.

We must be careful when throwing around words like science, as it means many things to many people. What McCloskey is referring to is the stuff that generally gets grouped into the Scientific Revolution: the high theory traded by the Republic of Letters.

The jet engine example I mentioned earlier is exactly the sort of thing McCloskey has in mind. Take another example, from the book:

“Cheap steel,” for example, is not a scientific case in point. True, as Mokyr points out, it was only fully realized that steel is intermediate between cast and wrought iron in its carbon content early in the nineteenth century, since (after all) the very idea of an “element” such as carbon was ill-formed until then. Mokyr claims that without such scientific knowledge, “the advances in steelmaking are hard to imagine.” I think not. Tunzelmann notes that even in the late nineteenth century “breakthroughs such as that by Bessemer in steel were published in scientific journals but were largely the result of practical tinkering.” My own early work on the iron and steel industry came to the same conclusion. Such an apparently straightforward matter as the chemistry of the blast furnace was not entirely understood until well into the twentieth century, and yet the costs of iron and steel had fallen and fallen for a century and a half.

This story plays out over and over again: the hard work of material progress is done by practitioners, but everyone assumes that credit belongs to the theorists.

It turns out that it isn’t even safe to make assumptions about those industries where theory seems, from the outside, to really dominate practice. What could be more driven by economic and financial theory than options trading? Surely this must be a case more in line with traditional understandings of the relationship between theory and practice.

And yet Nassim Taleb and Espen Gaarden Haug have documented how options traders do not use the output of theorists at all, but instead have a set of practices developed over time through trial and error.

Back to McCloskey:

The economic heft of the late-nineteenth-century innovations that did not depend at all on science (such as cheap steel) was great: mass-produced concrete, for example, then reinforced concrete (combined with that cheap steel); air brakes on trains, making mile-long trains possible (though the science-dependent telegraph was useful to keep them from running into each other); the improvements in engines to pull the trains; the military organization to maintain schedules (again so that the trains would not run into each other: it was a capital-saving organizational innovation, making doubletracking unnecessary); elevators to make possible the tall reinforced concrete buildings (although again science-based electric motors were better than having a steam engine in every building; but the “science” in electric motors was hardly more than noting the connection in 1820 between electricity and magnetism; one didn’t require Maxwell’s equations to make a dynamo); better “tin” cans (more electricity); asset markets in which risk could be assumed and shed; faster rolling mills; the linotype machine; cheap paper; and on and on and on. Mokyr agrees: “It seems likely that in the past 150 years the majority of important inventions, from steel converters to cancer chemotherapy, from food canning to aspartame, have been used long before people understood why they worked…. The proportion of such inventions is declining, but it remains high today.”

In 1900 the parts of the economy that used science to improve products and processes (electrical and chemical engineering, chiefly, and even these sometimes using science pretty crudely) were quite small, reckoned in value of output or numbers of employees. And yet in the technologically feverish U.K. in the eight decades (plus a year) from 1820 to 1900, real income per head grew by a factor of 2.63, and in the next eight “scientific” decades only a little faster, by a factor of 2.88. The result was a rise from 1820 to 1980 of a factor of (2.63) × (2.88) = 7.57. That is to say, since 2.63 is quite close to 2.88, nearly half of the world-making change down to 1980 was achieved before 1900, in effect before science. This is not to deny science its economic heft after science: the per capita factor of growth in the U.K. during the merely twenty years 1980 to 1999 was fully 1.53, which would correspond to an eighty-year factor of an astounding 5.5. The results are similar for the United States, though as one might expect at a still more feverish pace: a factor of 3.25 in per capita real income from 1820 to 1900, 4.54 from 1900 to 1980, and about the same frenzy of invention and innovation and clever business plans as Britain after 1980.
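
To make the arithmetic in that passage explicit (my own restatement, not McCloskey’s): growth factors over successive periods multiply, and a steady twenty-year factor compounds over eighty years by being raised to the fourth power.

\[ 2.63 \times 2.88 \approx 7.57 \qquad \text{(the 1820–1900 factor times the 1900–1980 factor gives the 1820–1980 factor)} \]

\[ 1.53^{80/20} = 1.53^{4} \approx 5.48 \approx 5.5 \qquad \text{(the 1980–1999 pace, if sustained for eighty years)} \]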

Note that McCloskey is not saying that science hasn’t made any contribution at all, or that the contribution is small. Taleb does not make that claim either. What is at issue here is that the contribution of science to our material well-being is not just overblown, but overblown by several orders of magnitude. McCloskey ultimately concludes that “We would be enormously richer now than in 1700 even without science.”

Yet They Are Everywhere in Chains

Alex Tabarrok thinks the road to the innovation renaissance runs through focusing education funding on STEM majors and tailoring our patent system so it only provides protection for industries like pharmaceuticals, where it appears to make the biggest positive difference. Even Michele Boldrin and David Levine, who otherwise believe in abolishing intellectual property entirely, agree with Tabarrok’s exception. And Tyler Cowen believes that part of what we need to do in order to climb out of the Great Stagnation is elevate the status of science and scientists.

With respect to these distinguished gentlemen, I disagree. The road to greater prosperity lies in breaking the shackles we have increasingly placed on practitioners, and in elevating their work and their status.

Whether or not the specific skills implied by a STEM career contribute to progress, it is quite clear that what is taught in the classroom is unlikely to be what is practiced in the field, since the teaching is done by teachers, who are not, as a general rule, practitioners. And let us return to Scranton, McCloskey, and Taleb: the vast majority of our material wealth came from tinkering that was decidedly non-STEM.

If you want to make progress in pharmaceuticals, don’t do it by enforcing (or worse, expanding) patents, which inhibit trial and error by anyone who does not hold the patent. Instead, remove the enormous impediments we have put up to experimentation. The FDA approval process imposes gigantic costs on drug development, including the cost of delaying a drug’s arrival on the market, and it greatly reduces the number of drugs that can be developed. There is an entire agency whose sole purpose is to regulate medical trials.

It is all futile. As I have said before, in the end the general market becomes the guinea pig for many years after a drug is available, and no conceivable approval process can change that fact. But if you think differently, if you think theorists can identify ahead of time which treatments are likely to succeed, and are capable of designing experiments that will detect any serious side effects, then our current setup makes a lot of sense.

But that is not the reality. In his latest book, Nassim Taleb argues that we should avoid treating people who are mostly healthy, because of the possibility of unknown complications; on the other hand, we should take far more risks with people who are dangerously ill than our current system allows.

The trend is going the other way. Because we have made developing drugs so expensive, it is much more profitable to try to come up with the next Advil, a drug that eases the symptoms of a mild condition but sells to a very wide market, than a cure for rarer but deadlier diseases. In one sense it doesn’t matter what drug developers set out to do, because the ultimate use of a drug is discovered through practice, not through theory. But it does matter in another sense: we are currently wasting many rounds of trial and error, putting people at risk in attempts to make small gains.

Thalidomide remains the iconic example of how this works. It was marketed as an anti-nausea drug but caused birth defects when pregnant women took it. Yet it is widely used today, for treating far more serious problems than nausea.

You Cannot Banish Risk

Aside from overestimating the abilities of theorists, the reason the discovery process of practitioners has been so hamstrung is that people are afraid of the errors inevitable in a process of trial and error. Thalidomide babies were an error, a horrible one. But there is no process, no theory that will allow us to avoid unforeseen mistakes. The only path to the drug that cures cancer or AIDS or malaria is one that involves people being hurt by unforeseen consequences. As Neal Stephenson put it, some people have to take a lot of risks in order to reduce the long-run risk for all of us.

And along with the unforeseen harms, there are unforeseen gains as well. Penicillin, arguably the single greatest advance in medicine in the 20th century, was an entirely serendipitous discovery.

I do not know if the stories of a great stagnation are accurate, but I agree with Peter Thiel that our regulatory hostility towards risk-taking impoverishes us all and results in many avoidable deaths every year.

The only way to start pushing the technological frontier again, the way we did at the peak of the Industrial Revolution, is to empower the practitioners rather than impair them.

Unleash the practitioners and progress will follow.

Published by

Adam Gurri

Adam Gurri works in digital advertising and writes for pleasure in his spare time. His present research focuses on the ethics of business and work, from the perspective of virtue and human flourishing.

16 thoughts on “Unleash the Practitioners”

  1. Your first blog should have been phronesis-pundit. Lots of interesting things surround this word/idea of practical wisdom. It pops up all over the place in discussions about the rule of law, especially if we contrast rules vs. standards. Rules are top-down controls with no discretion for individual judges or officials; standards, on the other hand, invite bottom-up control so that the exercise of discretion occurs where there is actual on-the-ground knowledge.

    Some people who focus on it in the literature: Larry Solum focuses on practical wisdom as the key virtue for judges; De Tocqueville doesn’t use the term, but it is what makes him optimistic about the American legal system of the 1830s: most of the work in law is private, with lawyers (then, not now) having a deep connection to the practical issues and desires of their clients; William Simon laments the change in the welfare system that took place in the 1960s, wherein case-worker discretion was replaced by by-the-book rules and what he calls the proletarianization of the social worker (the transformation from a case-worker into a mere clerk, effectively).

    And lurking in the background are the broader and compatible ideas of Hayek (law = norms and standards from practical experience vs. legislation = rules laid down by “expert” politicians); Lon Fuller (the rule of law demands respect for the dignity and agency of the individual); and others.

  2. I generally agree with the sweep of this post, but perhaps you go too far at the end. The reason why tinkering was paramount for the last couple of centuries is that they were dealing with systems that were incredibly complex: physics, chemistry, medicine (i.e., the human body). To the extent we’re still pushing forward in those fields, obviously tinkering is going to be key. But the great “technological frontier” today consists of the twin technologies of the computer and the network, two machines built of human ingenuity and theory, and there we need a lot more thinking and less tinkering.

    Perhaps I’m asking for too much, as most progress in these new fields unfortunately still seems to come much more from serendipitous tinkering than anything else. But I think there’s much more scope for “theory” there, even if it mostly isn’t being done. That will change. In any case, the ideal is a complex interplay between theory and practice, and that’s likely what we’ll see going forward across the board, which probably means we currently need a lot more tinkering in the non-computer/internet fields, as the “theorists” are probably ascendant there now. But that doesn’t mean there aren’t fields where theory should be much more integral.

    1. I think we see eye to eye here. My point isn’t that theory-based progress doesn’t exist, but merely that it is perceived as more extensive than it actually is. McCloskey discusses several examples of it in the book (I think she attributes the telegraph to it, for instance) and even goes so far as to say that it has become much more important, relatively speaking, in recent times. I think that’s partly because we have put more and more constraints on the tinkering side, but it’s also certainly partly because the theory-driven side has had more and more to contribute.

      But figuring out which fields should have theory at their center is itself something that requires tinkering to discover, if that makes any sense. It’s not clear a priori when theory is going to have more to offer.

  3. I’m confused about the relevance of the Dawkins quote at the beginning of this post. Are you arguing that there are post-modernists at 30,000ft (that nature is subjective) because theory doesn’t necessarily precede application?

    I agree experimentalists and lab workers need to be put on a higher pedestal; their work is insightful, fundamental, repetitive, and extremely sensitive to errors. But again, I do not see how that implies we should deregulate drug companies. The use of a drug is indeed first hypothesized (by application of some theory or adapted from knowledge of a related drug), but the ultimate use of a drug is first tested in controlled trials on (ideally) willing patients, surely not in practice on the general public!! My god, how irresponsible to use drugs on people with no justification for an idea. If anything, deregulation of pharma would further weaken the reliability of industry-funded trials.

    If you are claiming trials are a form of practice, then you should be arguing for better-regulated studies to weed out unexpected behavior from complicated data. A good book, “Bad Pharma” by Goldacre, just came out outlining why companies should not be given free rein over drug regulation. It’s pop-sci, so quite manageable to read.

    1. Hey Lukas, thanks for the feedback.

      The whole premise of this post is that we’re not very good at a priori, theoretical knowledge. I understood Dawkins’ quote, at the time and in the context in which I read it, to be claiming that the airplane was an example of how theoretical knowledge translated into something solid that we all relied on, even postmodernists. Perhaps I misunderstood him?

      The use of a drug may be “first hypothesized”, but its ultimate application is discovered through little more than rote trial and error, bearing little relationship to the original hypothesis.

      What I’m stating is that the situation is _already_ one in which the general public ends up being the test subject, because there is a limit to how large a sample pharmaceutical companies can afford to test on, even over a decade-long process. And in the end the samples are not very random in their selection.

      The whole argument, in other words, is that there *is* no such thing as a better study; we don’t end up knowing how a drug will work until we try it, and on a lot of people. And it would be better if we dropped the huge, costly barriers to developing and releasing drugs, and instead concentrated on, say, making it so that patients who want to try a drug that has been out for less than ten years have to sign paperwork a mile long acknowledging that they understand this is a risky thing whose effects are uncertain, etc., etc.

      All we’re doing now is slowing down the process of discovery and allowing a lot of people to die who shouldn’t have to, while not really protecting anyone from undiscovered side effects, because we’re not very good at finding them anyway.

      1. Thanks for the calm reply to my less-than-calm comment. The Dawkins quote is from River Out of Eden, but he has used it many times since the ’90s:

        “Show me a cultural relativist at 30,000 feet and I’ll show you a hypocrite … If you are flying to an international congress of anthropologists or literary critics, the reason you will probably get there – the reason you don’t plummet into a ploughed field – is that a lot of Western scientifically trained engineers have got their sums right.”

        I don’t interpret this as a comment on the development of airplanes by practitioners (engineers and mechanics) vs. theoretical physicists. It is a comment that the world has a certain level of objectivity commonly disregarded by relativists. There is something about science and engineering that transcends the culture that developed it, allowing us to objectively gain and apply knowledge.

        Turning towards medicine, I agree that our theoretical base of medical knowledge doesn’t allow us to accurately predict the exact behavior of a compound. And for that reason, new drugs undergo extensive study in the lab (first on numerous animals, then in low doses in humans, and then in large doses in humans). These individuals are ideally monitored very closely and their symptoms tracked. This is the rote trial-and-error part of science. These are the places we find most side effects. If the drug makes it to market, long-term studies can be done. My claim is that if you get rid of regulations, these “tests” will instead be run on unmonitored subjects, where detection cannot be done. It is one thing to realize “black swans” exist; it is another to close your eyes and make all swans black.

        In 2006, a drug named TGN1412 was tested. It was a new compound (not a variant on one we knew something about), but one predicted to bind in a particular way to white blood cells. Six patients were given the trial drug at the same time, and within an hour all of them had developed severe side effects, from extreme fever to loss of memory and eventually to loss of respiratory function. So when I hear you say “we don’t end up knowing how a drug will work until we try it, and on a lot of people,” I say sometimes testing on one is enough. (And for the record, the compound did bind as theoretically predicted; it just had unforeseen consequences.)

        1. Hey Lukas,

          No worries. As a frequent less-than-calm-comment offender, I try not to take it personally 🙂

          My bad on the Dawkins quote. I honestly had understood it in the way I described, and when I learned that the jet engine was invented before we understood how it worked, I thought immediately of the quote and was struck by how this new information was at odds with my understanding of it (an understanding I had agreed with at the time).

          I think there’s a reasonable middle ground between your argument and mine on pharma development that we would both be happier with than the current setup. While we definitely want the small-scale, highly monitored tests to catch the TGN1412s quickly, we also don’t want to stop lifesaving drugs from reaching the market by putting a 12-year process between them and us, and we don’t want to make drug development so expensive that it’s unprofitable to make lifesaving drugs and only profitable to make the next symptom-easer. That’s the current reality.
