The Improbability Principle: Why Coincidences, Miracles, and Rare Events Happen Every Day by David J. Hand

Author: Hand, David J.
Language: eng
Format: mobi, epub
Publisher: Farrar, Straus and Giroux
Published: 2014-02-11T00:00:00+00:00


Selection Bias in Science

The law of selection manifests itself in science in what is called “selection bias,” which I mentioned in chapter 2. For example, in the late eighteenth century William Withering discovered that the plant foxglove was effective in alleviating what was then called dropsy, describing it in An Account of the Foxglove and Some of Its Medical Uses. He wrote, “It would have been an easy task to have given select cases, whose successful treatment would have spoken strongly in favour of the medicine, and perhaps been flattering to my own reputation. But Truth and Science would condemn the procedure. I have therefore mentioned every case in which I have prescribed the Foxglove, proper or improper, successful or otherwise.”9 He understood how selecting cases could be misleading, and was at pains to avoid the error.

In my book Information Generation: How Data Rule Our World,10 I described several incidents in which very well-known figures in the history of science appear to have selectively chosen their results to support preconceived ideas. These figures include Louis Pasteur, who discovered that most infectious diseases were caused by microorganisms, and Robert Millikan, who measured the charge on the electron. Millikan was very explicit about having selected his data: “I would have discarded [the lower-quality results] had they not agreed with the results of the other observations.”11 The fact that he discussed this suggests he might have appreciated the dangers.

If selecting from the results is one way of distorting conclusions, another is to decide what hypothesis you are testing after you’ve carried out the experiment and collected the data. This procedure has been called “HARKing,” an acronym for hypothesizing after the results are known. It’s clear that if you do this you can easily come up with hypotheses which are supported by the data! Put like that, the dangers probably seem obvious, but the effect typically manifests itself in much more subtle ways. For example, researchers might sift through the data, observe a hint of a trend in a particular direction, and then carry out a more elaborate statistical analysis and test on the same data to see whether the trend is significant. But any conclusion will be distorted by the initial observation of the hint of a trend.
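The cost of testing a hypothesis on the same data that suggested it can be made concrete with a small simulation. The following is a minimal Python sketch (my own illustration, not from the book; the sample sizes and the use of numpy and scipy are arbitrary assumptions): every measurement is pure noise, yet picking the most promising-looking variable and then formally testing that same variable pushes the false-positive rate far above the nominal 5 percent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_studies = 2000    # simulated studies
n_variables = 20    # measurements recorded in each study -- all pure noise
n_samples = 30      # observations per measurement
alpha = 0.05        # nominal significance level

false_positives = 0
for _ in range(n_studies):
    # Every variable is pure noise: there is no real effect anywhere.
    data = rng.normal(size=(n_variables, n_samples))
    # "Sift through the data": pick the variable showing the biggest hint
    # of a trend (largest sample mean), then test that same variable.
    hinted = data[np.argmax(data.mean(axis=1))]
    if stats.ttest_1samp(hinted, 0.0).pvalue < alpha:
        false_positives += 1

print(f"False-positive rate: {false_positives / n_studies:.1%} "
      f"(nominal level: {alpha:.0%})")
```

In a run like this the observed false-positive rate typically lands several times higher than the 5 percent the test nominally promises, because the hypothesis was chosen to fit the noise in the first place.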

Another version of selection bias which has attracted a great deal of attention also came up in chapter 2: publication bias. This is the tendency for scientific journals to preferentially publish studies which show a phenomenon rather than those which fail to show it. It’s also sometimes called “the file drawer effect,” reflecting the fact that the unpublished studies end up confined to the file drawer, never appearing as papers in the scientific literature.

It makes perfect sense. Studies which conclude a drug is effective are intrinsically more exciting than studies which conclude that the drug does not have an effect. So authors will be less inclined to submit papers describing results of the latter sort than the former, and editors will be more likely to accept the former sort for publication.
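The distortion this produces in the published record can again be illustrated with a short simulation. Here is a hedged Python sketch (my own, not from the book; the effect size, sample sizes, and significance threshold are arbitrary assumptions): a drug with a small genuine benefit is tested in many trials, but only the trials that happen to yield a significant positive result are “published,” and the published estimates exaggerate the true effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_effect = 0.1    # genuine but small benefit, in standard-deviation units
n_per_arm = 30       # patients per arm in each trial
n_trials = 5000      # simulated trials of the same drug

all_effects, published_effects = [], []
for _ in range(n_trials):
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    estimate = treated.mean() - control.mean()
    all_effects.append(estimate)
    # The file drawer: only trials with a significant positive result
    # make it into the journals.
    if estimate > 0 and stats.ttest_ind(treated, control).pvalue < 0.05:
        published_effects.append(estimate)

print(f"True effect:                {true_effect:.2f}")
print(f"Mean estimate, all trials:  {np.mean(all_effects):.2f}")
print(f"Mean estimate, published:   {np.mean(published_effects):.2f}")
```

The average over all trials sits close to the true effect, but with only thirty patients per arm an estimate must be several times larger than the true value to reach significance, so the average over the “published” trials overstates the benefit substantially.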


