Unleash the Practitioners

Richard Dawkins is famously optimistic about human knowledge, especially within the confines of science. He is, understandably, allergic to the brand of postmodernist who believes that reality is simply a matter of interpretation or cultural narrative. He has a much-repeated one-liner that comes off as quite devastating: “There are no postmodernists at 30,000 feet.”

It’s quite convincing. Engineers were able to make airplanes because of knowledge that was hard-won by the scientific community. The latter developed and tested theories, which the former could then put to use in order to get us moving about in the air at 30,000 feet. Right?

Wrong.

Historian Philip Scranton has done extensive work demonstrating that the original developers of the jet engine had no idea of the theory behind it, which was only developed after the fact. The jet engine was arrived at through tinkering and rote trial and error.

Dawkins was correct that there is a hard reality that is undeniable, one that led to many failed prototypes. But the background story of science that he subscribes to is simply incorrect in this instance. Scientists didn’t develop theory that practitioners could then apply; the practitioners invented something that scientists then felt the need to explain.

What’s amazing is how often this turns out to be the case, once you start digging.

Practitioners Elevated Us to New Heights

If there is one book that should be mandatory reading for every student of history, it is Deirdre McCloskey’s Bourgeois Dignity. It lays out in stark fashion just how little we know about what caused the enormous explosion in our standard of living that started over two hundred years ago. She systematically works through every attempted explanation and effectively eviscerates them. Issues of the day seem small when put in the perspective of a sixteen-fold growth in our standard of living (conservatively measured), and the utter inability of theorists to explain this phenomenon is humbling.

For our purposes here we focus on Chapter 38: “The Cause Was Not Science”.

We must be careful when throwing around a word like science, as it means many things to many people. What McCloskey is referring to is the stuff that generally gets grouped into the Scientific Revolution: the high theory traded by the Republic of Letters.

The jet engine example I mentioned earlier is exactly the sort of thing McCloskey has in mind. Take another example, from the book:

“Cheap steel,” for example, is not a scientific case in point. True, as Mokyr points out, it was only fully realized that steel is intermediate between cast and wrought iron in its carbon content early in the nineteenth century, since (after all) the very idea of an “element” such as carbon was ill-formed until then. Mokyr claims that without such scientific knowledge, “the advances in steelmaking are hard to imagine.” I think not. Tunzelmann notes that even in the late nineteenth century “breakthroughs such as that by Bessemer in steel were published in scientific journals but were largely the result of practical tinkering.” My own early work on the iron and steel industry came to the same conclusion. Such an apparently straightforward matter as the chemistry of the blast furnace was not entirely understood until well into the twentieth century, and yet the costs of iron and steel had fallen and fallen for a century and a half.

This story plays out over and over again: the hard work of material progress is done by practitioners, but everyone assumes that the credit belongs to the theorists.

It turns out that it isn’t even safe to make assumptions about those industries where theory seems, from the outside, to really dominate practice. What could be more driven by economic and financial theory than options trading? Surely this must be a case more in line with traditional understandings of the relationship between theory and practice.

And yet Nassim Taleb and Espen Gaarden Haug have documented how options traders do not use the output of theorists at all, but instead have a set of practices developed over time through trial and error.

Back to McCloskey:

The economic heft of the late-nineteenth-century innovations that did not depend at all on science (such as cheap steel) was great: mass-produced concrete, for example, then reinforced concrete (combined with that cheap steel); air brakes on trains, making mile-long trains possible (though the science-dependent telegraph was useful to keep them from running into each other); the improvements in engines to pull the trains; the military organization to maintain schedules (again so that the trains would not run into each other: it was a capital-saving organizational innovation, making doubletracking unnecessary); elevators to make possible the tall reinforced concrete buildings (although again science-based electric motors were better than having a steam engine in every building; but the “science” in electric motors was hardly more than noting the connection in 1820 between electricity and magnetism–one didn’t require Maxwell’s equations to make a dynamo); better “tin” cans (more electricity); asset markets in which risk could be assumed and shed; faster rolling mills; the linotype machine; cheap paper; and on and on and on. Mokyr agrees: “It seems likely that in the past 150 years the majority of important inventions, from steel converters to cancer chemotherapy, from food canning to aspartame, have been used long before people understood why they worked…. The proportion of such inventions is declining, but it remains high today.”

In 1900 the parts of the economy that used science to improve products and processes–electrical and chemical engineering, chiefly, and even these sometimes using science pretty crudely–were quite small, reckoned in value of output or numbers of employees. And yet in the technologically feverish U.K. in the eight decades (plus a year) from 1820 to 1900, real income per head grew by a factor of 2.63, and in the next eight “scientific” decades only a little faster, by a factor of 2.88. The result was a rise from 1820 to 1980 of a factor of (2.63) × (2.88) = 7.57. That is to say–since 2.63 is quite close to 2.88–nearly half of the world-making change down to 1980 was achieved before 1900, in effect before science. This is not to deny science its economic heft after science: the per capita factor of growth in the U.K. during the merely twenty years 1980 to 1999 was fully 1.53, which would correspond to an eighty-year factor of an astounding 5.5. The results are similar for the United States, though as one might expect at a still more feverish pace: a factor of 3.25 in per capita real income from 1820 to 1900, 4.54 from 1900 to 1980, and about the same frenzy of invention and innovation and clever business plans as Britain after 1980.
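(A quick check of the compounding in that passage: the two eighty-year factors multiply, 2.63 × 2.88 ≈ 7.57, and the twenty-year factor of 1.53, sustained over four twenty-year periods, would compound to 1.53 × 1.53 × 1.53 × 1.53 ≈ 5.5.)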

Note that McCloskey is not saying that science has made no contribution at all, or that the contribution is small. Taleb does not make that claim either. What is at issue here is that the credit conventionally given to science for our material well-being is not just overblown, but overblown by several orders of magnitude. McCloskey ultimately concludes that “We would be enormously richer now than in 1700 even without science.”

Yet They Are Everywhere in Chains

Alex Tabarrok thinks the road to the innovation renaissance runs through focusing education funding on STEM majors and tailoring our patent system so that it only provides protection for industries, like pharmaceuticals, where it appears to make the biggest positive difference. Even Michele Boldrin and David Levine, who otherwise believe in abolishing intellectual property entirely, agree with Tabarrok’s exception. And Tyler Cowen believes that part of what we need to do in order to climb out of the Great Stagnation is to elevate the status of science and scientists.

With respect to these distinguished gentlemen, I disagree. The road to greater prosperity lies in breaking the shackles we have increasingly put around practitioners, and in elevating their work and their status.

Whether or not the specific skills implied by a STEM career contribute to progress, it is quite clear that what is taught in the classroom is unlikely to be what is practiced in the field, since the teaching is done by teachers, who are not, as a general rule, practitioners. And let us return to Scranton, McCloskey, and Taleb: the vast majority of our material wealth came from tinkering that was decidedly non-STEM.

If you want to make progress in pharmaceuticals, don’t do it by enforcing (or worse, expanding) patents, which inhibit trial and error by those who do not hold the patent. Instead, remove the enormous impediments we have put up to experimentation. The FDA approval process imposes gigantic costs on drug development, including the cost of delaying a drug’s arrival on the market, and it greatly reduces the number of drugs that can be developed at all. There is an entire agency whose sole purpose is to regulate medical trials.

It is all futile. As I have said before, in the end the general market serves as the guinea pig for many years after a drug becomes available, and no conceivable approval process can change that fact. But if you think differently–if you think theorists can identify ahead of time which treatments are likely to succeed, and can design experiments that will detect any serious side effects–then our current setup makes a lot of sense.

But that is not the reality. Nassim Taleb argued in his latest book that we should avoid treating people who are mostly healthy, because of the possibility of unknown complications. On the other hand, we should take way more risks with people who are dangerously ill than our current system allows.

The trend is going the other way. Because we have made developing drugs so expensive, it is much more profitable to try to come up with the next Advil, a drug that eases the symptoms of a mild condition but is purchased by a very wide market, than a cure for rarer but deadlier diseases. In one sense it doesn’t matter what drug companies set out to do, because the ultimate use of a drug is discovered through practice, not through theory. But it does matter in another sense: we are currently wasting many rounds of trial and error, and putting people at risk, in order to attempt small gains.

Thalidomide remains the iconic example of how this works. It was marketed as an anti-nausea drug but caused birth defects when pregnant women took it. Yet it is widely used today to treat far more serious problems than nausea.

You Cannot Banish Risk

Aside from overestimating the abilities of theorists, the other reason the discovery process of practitioners has been so hamstrung is that people are afraid of the errors that are inevitable in a process of trial and error. Thalidomide babies were an error, a horrible one. But there is no process, no theory, that will allow us to avoid unforeseen mistakes. The only path to the drug that cures cancer or AIDS or malaria is one that involves people being hurt by unforeseen consequences. As Neal Stephenson put it, some people have to take a lot of risks in order to reduce the long-run risk for all of us.

And along with the unforeseen harms, there are unforeseen gains as well. Penicillin, arguably the single greatest advancement in medicine in the 20th century, was an entirely serendipitous discovery.

I do not know if the stories of a great stagnation are accurate, but I agree with Peter Thiel that our regulatory hostility towards risk-taking impoverishes us all and permits many avoidable deaths every year.

The only way to start pushing the technological frontier again, as we did at the peak of the Industrial Revolution, is to empower the practitioners rather than impair them.

Unleash the practitioners and progress will follow.
