In the pages of Evidence-Based Mental Health and elsewhere, authors have argued that it is ethically obligatory to practice evidence-based medicine (EBM).1–3 The argument in favour of this position begins with the assumption that EBM is more able than other strategies to provide us with accurate information about the effectiveness of medical interventions. Accurate information is essential to making clinical recommendations that will be effective in optimising our patients’ health. Optimising patients’ health is a basic ethical duty for medical practitioners. Therefore, the most accurate information—delivered by EBM—is required for practitioners to fulfil a primary ethical obligation. Similarly, knowingly using less accurate information is less likely to optimise patients’ health and is therefore inherently unethical.
This ethical argument relies on an epistemological assumption—that EBM provides us with a more reliable way of knowing than pre-EBM medicine. To date, there is no body of evidence, although there are case examples, demonstrating that EBM is more able to generate accurate medical information than pre-EBM medicine. Furthermore, it is also uncertain whether EBM achieves its ultimate goal of improving patient health compared with previous modes of practice. In the absence of such evidence, EBM relies on this ethical argument to support—and indeed demand—its use.
Because EBM’s ethical rationale depends upon the correctness of its epistemological assumption, the assumption must be capable of withstanding scrutiny before one is ethically obliged to practice EBM. In EBM, as the name suggests, it is evidence upon which medical practice is based and it is evidence that will improve the accuracy and reliability of our knowledge. Thus, in order to test EBM’s epistemological assumption, it is essential to understand how EBM defines “evidence”.
In the authoritative accounts of EBM, “evidence” is never formally defined, but is implied to be quantitative data obtained from research studies—preferably randomised controlled trials (RCTs) or meta-analyses of these trials. Data generated by other research methods are less preferred and located lower down the “evidence hierarchy”. Some consideration is given to non-quantitative research data, such as qualitative data, although the status and weight of such data remain unclear. EBM’s proponents also assert EBM’s increased reliability by portraying it as inherently neutral, a corrective measure against the biases inherent in any other form of clinical reasoning. There is no mention of the social, political, and economic contexts in which EBM is practiced, nor any acknowledgement of how these contexts could influence the generation or dissemination of research data. A few examples suggest that this view of EBM as neutral may be unfounded.
“Source of funding bias” is apparent in psychiatric research when we consider the far more rapidly growing body of research on pharmaceuticals compared with psychosocial interventions. Large, private corporations with a commercial interest in pharmaceuticals are a source of funding for researching one type of intervention—medications—whereas there is no equivalent body to favour the funding of other types of psychiatric interventions such as psychosocial treatments. While fewer data do not mean that psychosocial interventions are ineffective, over time the gradual accumulation of research data concerning pharmaceuticals suggests greater evidence of their effectiveness compared with other types of interventions. The greater availability of research data concerning medications as compared with psychosocial interventions is also fostered by “technical bias”, which is built into EBM’s very structure. Technical bias favours research that we already know how to do. Its ethos is aptly captured in step one of the five steps of EBM practice: converting the need for information into an “answerable” question.4 As Miettinen points out, there may well be intellectual and clinical value in considering unanswerable, or at least difficult-to-answer, questions.5 Furthermore, there may be a gap between what one needs to know and the answers that the medical research literature can provide. Because EBM prefers certain research methods and certain types of data in its evidence hierarchy, EBM favours those interventions that can best be studied using EBM rules.6 For example, research studies of pharmaceuticals are more likely to be conducted according to EBM-preferred methods, and these methods will generate data that are ranked more highly by EBM (quantitative data from RCTs). Psychotherapy research, on the other hand, is difficult to conduct according to EBM rules—it can never meet even the basic EBM requirement of double blinding.
Psychotherapy research is also fraught with methodological problems, particularly when compared with studies of pharmaceutical agents. As a result, psychotherapy research data will necessarily be considered tentative and less definitive than data generated by RCTs and meta-analyses of pharmaceuticals.
Source of funding bias and technical bias affect the generation of research data. Publication bias, in which certain types of data are knowingly kept from publication, distorts the dissemination of research data.7 Publication bias means that the total pool of research data from which we draw conclusions about medical interventions is incomplete and therefore cannot be relied upon. The recent controversy over the use of SSRIs in childhood major depression demonstrates this bias.8 In this case, clinical trials which demonstrated no advantage of the active drugs as compared with placebo were not published and not released for professional scrutiny. The psychiatric community thus drew its conclusions about these drugs unaware of these unpublished data.
In light of these examples, it is reasonable to conclude that EBM is not bias free. The social context in which EBM operates cannot be ignored, for this context influences the production of data. But in addition to ignoring potential sources of bias in producing data, EBM also promulgates a view that data alone will tell us which interventions are or are not effective. This view obscures the process by which data become evidence. Data do not support conclusions by themselves—they must first be interpreted. Interpretation is a process in which judgement is used to evaluate the relevance and weight of data. It is through this process that data come to be seen as supporting certain conclusions. However, interpretation is a human process that is necessarily influenced by a variety of factors including power relations, and commercial and other vested interests.9 The debate concerning SSRI use in childhood depression again illustrates this point. Now that some leading experts have had the opportunity to examine what we think is the total pool of data on this question, various interpretations of the data have arisen. Garland claims that the total data do not support the view that SSRIs are effective in childhood depression, whereas Korenblum disagrees.10 This dispute highlights the fact that the same data may be interpreted differently depending on who is examining them and what knowledge and background they bring to this exercise. Interpreters’ judgement may also be affected, consciously or unconsciously, by other factors such as economic concerns, peer pressure, and personal issues of ego and status.
It is important to note that biases and interpretation are not unique to EBM. All knowledge is affected, to different degrees, by these types of factors. However, EBM ignores these issues. This means that the assumption upon which EBM rests—that it is more likely to yield accurate information than any other method—cannot be taken for granted. Indeed, we have good reasons to believe that EBM has significant potential to produce inaccurate information. Because the assumption of greater accuracy does not necessarily hold, then the ethical obligation to practice EBM is also thrown into question.
Even if EBM has these limitations, might it be relatively less inaccurate than pre-EBM medicine? I do not believe it is possible to draw such a conclusion at present given that few of the health interventions that have had the greatest impact on morbidity and mortality have been derived from EBM. Measures such as clean water, handwashing, vaccination, and prenatal care have dramatically improved the health of people. Most of these interventions have arisen through both historical accident and a variety of intellectual developments, including those most derided by EBM, such as pathophysiological reasoning and clinical observation. EBM’s ability to optimise patients’ health remains to be seen. In the absence of epistemological justification, there is no ethical obligation to practice EBM.
Perhaps the best route to epistemological and ethical justification for EBM will begin with the recognition that all knowledge is tentative and subject to various distortions and errors, including knowledge generated by EBM-preferred methods. Eliminating distortion is probably impossible. This does not necessarily lead to a nihilistic conclusion that the pursuit of knowledge is pointless. Rather, it suggests that what is needed in medical practice is an openness to a plurality of sources of knowledge. Employing the standards of transparency and explicitness championed by EBM, each piece of knowledge can be evaluated on its own terms rather than in accordance with a rigid set of rules which may not be applicable. In this way, we can acknowledge the legitimate challenge posed by EBM—that is, to critically examine the basis of our clinical decisions while at the same time remaining open to the diverse means through which knowledge evolves.