Despite forays into editing, consulting, and teaching, I still consider myself first and foremost a clinician. And, at that, I like to think of myself as an evidence-based clinician. For me, this appreciation of the importance of research in my day-to-day clinical decision-making did not come easily. My entry-level degree in physical therapy strongly emphasized authority-based knowledge and clinician expertise. I regarded courses in research methodology and statistics as curriculum fillers and mere lip service to the supposed scientific foundation of my new profession. But who could blame me for this negative attitude some 20 years ago, when the research basis for physical therapy was very meager indeed? And even if there was relevant research, it certainly was not introduced to me during my early studies.
I continued on with graduate studies in the area of orthopaedic manual therapy (OMT), and by then—much to the credit of the physical therapy profession and, of course, of chiropractic, medicine, and osteopathy—the amount of relevant research had skyrocketed. However, the clinical value of research still eluded me. This was not because my professors were poor at conveying the central concepts and methods to me. Nor, although I would be the first to admit that I am certainly no statistical “wunderkind,” was it because I failed to understand research methodology or statistical procedures. Rather, it was the lack of concordance between what I saw in the clinic and what the research was telling me. Although I found OMT highly effective for a subset of patients, these admittedly anecdotal observations were not reflected in the randomized trials, systematic reviews and meta-analyses, or clinical practice guidelines I read.
With my by-now expanded knowledge of statistics and research methodology, I tried to make sense of this discrepancy between my clinical observations and the published research on OMT. Of course, there were the standard arguments: randomized trials tested interventions labeled physical therapy or OMT that in no way reflected what we actually do in clinical practice. Then, with no one quite sure what the effective component is in many OMT interventions, there was the use of inappropriate control group interventions that may have had unintended therapeutic effects, thereby washing out the effect of the experimental OMT interventions. Finally, there was the problem that the generally small sample sizes in OMT research predisposed these studies to type II error, that is, to failing to detect a true treatment effect1,2.
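The type II error point can be made concrete with a small simulation. The sketch below is purely illustrative: the effect size, sample sizes, and significance cutoff are invented for the example and are not drawn from any actual OMT trial. It assumes a true between-group benefit of half a standard deviation and counts how often a two-sample t-test fails to flag it:

```python
import math
import random

random.seed(1)

def welch_t(x, y):
    """Welch's two-sample t statistic (unequal variances)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

def miss_rate(n_per_group, effect=0.5, trials=2000, crit=2.0):
    # Fraction of simulated trials in which a REAL effect of `effect`
    # standard deviations fails to reach |t| > crit (roughly the 5%
    # two-sided cutoff) -- i.e., the type II error rate.
    misses = 0
    for _ in range(trials):
        control = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treated = [random.gauss(effect, 1.0) for _ in range(n_per_group)]
        if abs(welch_t(treated, control)) < crit:
            misses += 1
    return misses / trials

for n in (15, 100):
    print(f"n = {n:3d} per group: type II error rate ~ {miss_rate(n):.2f}")
```

With these assumed settings, groups of 15 miss the real effect most of the time, while groups of 100 rarely do; the cutoff of 2.0 is only a rough stand-in for the exact t-distribution critical value, but the qualitative point stands.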
With regard to systematic reviews, I questioned the comprehensiveness of the literature search; after all, a fair amount of research in this area is not published in the more easily accessible databases commonly used in such reviews3. Then there were the conflicting results based on the use of different methodological quality assessment tools despite nearly identical literature search results, the oversampling of data published multiple times (perhaps initially as a pilot and later as a larger study), and—perhaps most noticeable to a non-native English speaker like me—the bias of including only English-language articles.
With regard to meta-analyses, there was the issue that statistical pooling is appropriate only if the included studies are sufficiently homogeneous. If the studies are heterogeneous with regard to clinical parameters (e.g., sample, intervention, outcome assessment) or methodology, then pooling is inappropriate. Yet the statistical procedures used to test for similarity of the included studies have limited power to detect heterogeneity when a review includes only a few studies and these studies in turn have only a few subjects, a situation very common in OMT research4. And then there is the fact that systematic reviews and meta-analyses of interventions use randomized trials, with the specific weaknesses described above. No wonder such a high proportion of unreliable or poorly reported reviews is being published5.
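The limited power of heterogeneity tests can likewise be sketched in a simulation. The numbers below are invented for illustration and do not come from the cited reviews: each simulated meta-analysis pools studies whose true effects genuinely differ, and Cochran's Q, a standard heterogeneity test, is checked against its 5% chi-square critical value:

```python
import math
import random

random.seed(2)

# 5% critical values of the chi-square distribution, keyed by df = k - 1
CHI2_CRIT = {3: 7.815, 19: 30.144}

def cochran_q(effects, variances):
    """Cochran's Q: inverse-variance-weighted squared deviations
    of study effects from the pooled fixed-effect mean."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))

def detect_rate(k, n_per_arm, tau=0.3, trials=2000):
    # Fraction of simulated meta-analyses of k studies in which Q flags
    # REAL between-study heterogeneity (true effects spread with SD tau).
    var = 2.0 / n_per_arm  # sampling variance of a mean difference, SD = 1
    hits = 0
    for _ in range(trials):
        ests = [random.gauss(random.gauss(0.3, tau), math.sqrt(var))
                for _ in range(k)]
        if cochran_q(ests, [var] * k) > CHI2_CRIT[k - 1]:
            hits += 1
    return hits / trials

for k, n in ((4, 20), (20, 100)):
    print(f"{k:2d} studies, {n:3d} per arm: heterogeneity detected "
          f"~ {detect_rate(k, n):.2f} of the time")
```

Under these assumed settings, a review of four small studies detects the (real) heterogeneity only a minority of the time, while a review of twenty larger studies detects it almost always, which is exactly the situation the editorial describes for OMT research.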