When to Medicate

“We should not take risks with near-healthy people; but we should take a lot, a lot more risks with those deemed in danger.”

-Nassim Taleb, Antifragile

In an uncertain world, Taleb wants us to stop thinking we know the probabilities and instead think more seriously about payoffs.

Let’s say a new pill comes to market that claims to cure the common cold quickly and with minimal side-effects. What is the potential payoff from taking this pill? At best, you will end your cold more quickly than you otherwise would have. And at worst?

You may be tempted to say that the downside risk is not very large, as the pill had to go through a testing process run by the company that developed it and reviewed by the FDA. That process can take years–surely any problems would have been detected by its completion, right?

Uncertainty and Complexity

Wrong–any test is always going to have limits, by necessity. It might involve only one, two, or three thousand test subjects–whose selection is not truly random. Even if we could treat the statistical results with complete confidence, an effect that shows up in only a tiny fraction of this sample would still impact a large number of people once the drug reaches a market of millions. And any effect that takes longer to appear than the approval process lasts will be missed entirely.

The bottom line is that the general patient population ends up being guinea pigs sooner or later, and there is no avoiding it. It’s for this reason that Robin Hanson always advises his students to avoid the “cutting edge” medical treatments in favor of those that have been tested by time. Treatments that have been around for 50 or 100 years are much less likely to have undetected risks than treatments that are 20, 10, or 5 years old–or worst of all, brand new.

Every new treatment has a large, unknown downside risk of undetected side-effects. Moreover, every new treatment has a similarly large, unknown downside risk of interaction with other treatments already on the market. Even if the testing process turns out to have revealed every possible side-effect, it is literally impossible for it to have detected every possible interaction–consider that some interactions will end up being with treatments that didn’t exist at the time of testing!

What Is There to Gain?

Taleb’s point isn’t sophistry. Consider the most famous case of undetected harm in the 20th century–Thalidomide. I had known that after Thalidomide made it to market, it caused a rash of birth defects. What I hadn’t realized was that it was being used to treat morning sickness.

So in the best-case scenario, the women taking Thalidomide would have had their nausea pass more quickly and been otherwise unchanged. But the worst-case scenario was clearly unknown, as history proved. The question you have to ask yourself when you’re receiving some treatment today is whether what you’re being treated for is worth the risk of unwittingly stumbling upon the next Thalidomide.

If it’s something that our body is capable of dealing with on its own, Taleb’s advice is to forgo treatment entirely. When the potential payoff is so small, errors on the part of the medical establishment will only hurt us.

This doesn’t mean that we should become anti-medicine. Instead, we should focus on extreme cases, and be willing to take more risks in those cases than our current regulatory and cultural environment allows. Taleb:

And there is a simple statistical reason that explains why we have not been able to find drugs that make us feel unconditionally better when we are well (or unconditionally stronger, etc.): nature would have been likely to find this magic pill by itself. But consider that illness is rare, and the more ill the person the less likely nature would have found the solution by itself, in an accelerating way. A condition that is, say, three units of deviation away from the norm is more than three hundred times rarer than normal; an illness that is five units of deviation from the norm is more than a million times rarer!

If we focus on those cases rare or severe enough that natural selection was unlikely to have found a remedy for them on its own, we minimize the downside risk from unforeseen side-effects that we’re exposing ourselves to, and we maximize the potential gains of treatment.
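Taleb’s figures match the tails of a bell curve. Here is a minimal sketch, assuming (the quote doesn’t spell it out) that “units of deviation” means standard deviations of a normal distribution and that we count both tails:

```python
# Rough check of Taleb's "units of deviation" arithmetic, assuming he means
# two-sided tail probabilities of a standard normal distribution.
from scipy.stats import norm

for k in (3, 5):
    tail = 2 * norm.sf(k)  # P(|Z| > k): share of people at least k deviations from the norm
    print(f"{k} deviations out: about 1 in {1 / tail:,.0f}")

# Prints roughly:
#   3 deviations out: about 1 in 370
#   5 deviations out: about 1 in 1,744,278
```

Which is consistent with “more than three hundred times rarer” and “more than a million times rarer.”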

Thus, the answer is not to increase regulation of the pharmaceutical industry or expand the FDA approval process. The latter is already so long that it allows lives to be lost while life-saving drugs take forever to come to market.

The Impulse to Intervene

The answer isn’t to just take what your doctor tells you at face value, either.

Even if your doctor should tell you not to be treated at all 9 times out of 10, or 9.99 times out of 10, that is unfortunately not what you’re likely to hear when you arrive for your appointment.

Doctors are simply more likely to want to do something rather than nothing. Consider the following, again from Taleb:

Consider this need to “do something” through an illustrative example. In the 1930s, 389 children were presented to New York City doctors; 174 of them were recommended tonsillectomies. The remaining 215 children were again presented to doctors, and 99 were said to need the surgery. When the remaining 116 children were presented to yet a third set of doctors, 52 were recommended the surgery. Note that there is morbidity in 2 to 4 percent of the cases (today, not then, as the risks of surgery were very bad at the time) and that a death occurs in about every 15,000 such operations and you get an idea about the break-even point between medical gains and detriment.

In other words, doctors recommended surgery for a similar proportion of whatever group they were presented with–despite the fact that other doctors had already placed those same children in the group that didn’t need treatment!
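The “similar proportion” claim is easy to verify from the figures in the quote; a quick sketch:

```python
# Share of children recommended for surgery at each round, using the figures
# from Taleb's tonsillectomy example above.
rounds = [(174, 389), (99, 215), (52, 116)]

for recommended, presented in rounds:
    print(f"{recommended}/{presented} = {recommended / presented:.0%} recommended for surgery")

# Prints:
#   174/389 = 45% recommended for surgery
#   99/215 = 46% recommended for surgery
#   52/116 = 45% recommended for surgery
```

Roughly 45 percent each time, regardless of how many rounds of screening the children had already passed.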

Moreover, this problem is not confined to doctors in the 1930s. Consider how doctors and hospitals have responded to the scientific consensus that mammograms do not save lives on net.

For years now, doctors like myself have known that screening mammography doesn’t save lives, or else saves so few that the harms far outweigh the benefits. Neither I nor my colleagues have a crystal ball, and we are not smarter than others who have looked at this issue. We simply read the results of the many mammography trials that have been conducted over the years. But the trial results were unpopular and did not fit with a broadly accepted ideology—early detection—which has, ironically, failed (ovarian, prostate cancer) as often as it has succeeded (cervical cancer, perhaps colon cancer).

More bluntly, the trial results threatened a mammogram economy, a marketplace sustained by invasive therapies to vanquish microscopic clumps of questionable threat, and by an endless parade of procedures and pictures to investigate the falsely positive results that more than half of women endure. And inexplicably, since the publication of these trial results challenging the value of screening mammograms, hundreds of millions of public dollars have been dedicated to ensuring mammogram access, and the test has become a war cry for cancer advocacy. Why? Because experience deludes: radiologists diagnose, surgeons cut, pathologists examine, oncologists treat, and women survive.

In short, it is uncertain how deadly the cancers that mammograms detect early really are, but it is certain that the invasive tactics used to combat them put the patient at risk. The study that the article above opens with describes how the rise in mammograms has not resulted in a drop in late-stage, unambiguously dangerous breast cancers.

There are any number of possible stories you can tell about why doctors will opt to do something rather than nothing, even when every intervention–needless or needed–carries the risk of iatrogenesis.

A Robin Hanson-style story (PDF) would go as follows: doctors are simply meeting a market demand. People are not really looking for what is medically best for them when they make an appointment, any more than consumers of news are trying to become more informed. What patients want is comfort–the comfort of someone who knows what they’re doing taking charge of the decisions about their health. And few people take comfort in being told to do nothing–even if it’s the wisest choice. So the market produces doctors who satisfy the demand for comfort, rather than the demand for the best possible health outcomes.

The story subscribed to by Taleb and by the doctor quoted above is even more straightforward–more money is spent on intervention than on non-intervention, so the incentives are clear. I’m not so sure about this one, as the doctors making the diagnosis aren’t usually the ones who get paid for the procedure.

But the story doesn’t matter. The phenomenon of intervening too often is well documented, whatever the reason it occurs.

If what you’re interested in is your health, rather than comforting answers from a credentialed expert, then Taleb’s argument is worth considering. Do you really need to receive treatment for a bug that you’ll work through eventually, or for baldness, or for nausea that was always going to be temporary?

Why risk losing everything when you have so little to gain?

Or, as Taleb puts it:

Another way to view it: the iatrogenics is in the patient, not in the treatment. If the patient is close to death, all speculative treatments should be encouraged—no holds barred. Conversely, if the patient is near healthy, then Mother Nature should be the doctor.
