Last winter the Canadian Medical Association Journal published a fascinating article by Ken Myers discussing the (as-yet unexamined) benefits of cigarette smoking for endurance running performance. Ken is a friend and an elite distance runner (we literally used to run with the same crowd while I was doing my undergrad in Calgary), so I was very excited, and a bit confused, when I saw his article. Could smoking really be beneficial for distance runners like myself?
In the discussion, Ken goes on to point out that:
Cigarette smoking has been shown to increase serum hemoglobin, increase total lung capacity and stimulate weight loss, factors that all contribute to enhanced performance in endurance sports. Despite this scientific evidence, the prevalence of smoking in elite athletes is actually many times lower than in the general population. The reasons for this are unclear; however, there has been little to no effort made on the part of national governing bodies to encourage smoking among athletes.
Now at this point I assume that people are wondering how something this insane came to be published in a respected medical journal (as of 2010, CMAJ was ranked 9th out of 40 medical journals, with an impact factor of 9). The answer, of course, is that the point of Ken’s article was to illustrate how you can fashion a review article to support almost any crazy theory if you’re willing to cherry-pick the right data. Here is the paper’s abstract:
The review paper is a staple of medical literature and, when well executed by an expert in the field, can provide a summary of literature that generates useful recommendations and new conceptualizations of a topic. However, if research results are selectively chosen, a review has the potential to create a convincing argument for a faulty hypothesis. Improper correlation or extrapolation of data can result in dangerously flawed conclusions. The following paper seeks to illustrate this point, using existing research to argue the hypothesis that cigarette smoking enhances endurance performance and should be incorporated into high-level training programs.
For example, if I were to argue that “Intervention X” influences body fat distribution and pulled together a few mechanistic resources supporting my arguments, it would be very difficult for an educated layperson to know whether my arguments were sound. Unfortunately, that is the situation almost all of us are in any time we read anything that is even slightly outside our own area of research.
Even with systematic reviews, which are widely regarded as the highest form of scientific evidence, there is still a lot of room for subjectivity. You can design a systematic review in a way that makes it more or less likely to find a certain outcome, just as you could with an individual study. Beyond that, the review depends on the objectivity of the people screening articles, who could (intentionally or accidentally) systematically include or exclude studies in ways that shape the review’s ultimate conclusions. And then, of course, the authors have to synthesize the data and come to conclusions, both of which are largely subjective activities.
That doesn’t mean that there is always a nefarious intent on the part of researchers – I would argue that there almost never is. But consider the phenomenon of “White Hat Bias”, where researchers distort “information in the service of what may be perceived to be righteous ends”. And even the most objective and ethical researcher is still going to be looking at data through their own worldview, which may cause them to miss something that is in the data, or to “see” something that isn’t really there.
The point is that whether you’re reading a blog post or a systematic review in a prestigious medical journal, you really do need to stay skeptical at all times.
A very interesting discussion of one of the ways scientific research can become unscientific: selectively picking data to create the impression that the final conclusion is right.