EBMH notebook

Paradoxically, the more firmly psychological therapies are established as bona fide interventions within mainstream mental health services, the greater the case for evidence based practice. Patients, often in severe distress, make a major personal commitment to these therapies and have a right to know that they are safe and effective. Although psychological therapies do benefit patients,1 there is also evidence that patients' mental health can deteriorate if therapies are inappropriate or carried out incompetently.2 Finally, psychological therapy represents a substantial economic investment, and it could be seen as unethical to waste this money on ineffective treatments.
Evidence based psychotherapy has therefore been promoted internationally, albeit with different emphases in different health care systems. For example, the American trend, fuelled by third party payment systems in managed care, has been to compile lists of “empirically supported treatments”.3, 4 This provoked disquiet among researchers and practitioners alike.5 The National Health Service (NHS) in England has so far resisted this approach, arguing for a broader model of evidence based practice that includes both efficacy and effectiveness evidence, acknowledges the role of factors common to all therapies, and emphasises quality of service delivery.6
Systematic collation and appraisal of findings are indispensable for informed practice, commissioning or policy making. Quantitative summaries of treatment effects are not new in this field, which was indeed one of the first to apply meta-analytic techniques.7 Mackay and Barkham8 appraised systematic reviews of psychological therapies for common mental health problems in adults against Oxman and Guyatt's eight quality criteria.9 They found 42 reviews meeting all eight, and a further 43 meeting six or seven. Cochrane reviews of psychological therapies are becoming available, for example on treatment of deliberate self harm,10 debriefing in adults exposed to trauma11 and chronic fatigue.12 Others, on psychological treatments for eating disorders,13 depression14 and counselling in primary care,15 are in preparation. As more systematic reviews are completed, gaps in evidence and methodological shortcomings are continually identified, and pressure builds for good research to be commissioned to address them. There are other indications that evidence based psychotherapy is an idea whose time has come. Special issues of the journals Clinical Psychology: Science and Practice (1996) and Psychotherapy Research (1998) have appeared on the topic, and of course, Evidence-Based Mental Health has treated psychological and pharmacological interventions identically in its search for best evidence of what works in the mental health field. The first evidence based guideline on treatment choice decisions in psychological therapies is soon to be published.16
So, is all for the best in the best of all possible worlds? Not entirely—there remains considerable debate about and distrust of the principles of applying research evidence to change psychotherapy practice. Some practitioners are profoundly sceptical, and fear that far from improving care, the enterprise potentially does harm.17, 18 Scepticism about the wisdom of changing practice on the basis of research evidence is fuelled by the methodological problems of psychotherapy research. This leads to fears that unreliable or even misleading evidence may be prematurely and simplistically applied. More radically, some believe that psychotherapy, by its very nature, is inimical to the research paradigms of evidence based medicine.
Some problems with trials research are remediable. For example, psychological therapy is more difficult to deliver to a set standard than a drug treatment. Some negative results may stem from poor quality delivery of the intervention, and adherence to a treatment manual does not overcome this problem.19 The very act of randomisation may produce systematic between-group differences when patients have strong preferences for one therapy over another.20 High rates of attrition (“drop out”) within randomised trials sometimes create problems with interpreting the results. There are widespread practical difficulties in achieving textbook randomisation in psychotherapy trials. Despite randomisation, where samples are small there are often unplanned differences between the groups on important variables such as symptom severity. Most comparative outcome trials fail to distinguish between the effectiveness of different types of psychological therapy, for reasons that may be methodological, such as insufficient statistical power.21, 22 Many of the researchers who conduct comparative outcome trials are not in equipoise but are, on the contrary, enthusiasts for one therapy approach; this allegiance may influence outcome to the extent that removing its effect may alter the result of a meta-analysis.23 Individual therapists vary in effectiveness and, in most trials, such effects are unanalysed or unreported, despite their implications for interpreting the findings.24
Although these problems are ubiquitous, few of them are unique to psychotherapy trials and, in principle, they can all be addressed by improved research designs, including different designs to address different questions. This would mean adequately powered trials with clinically realistic samples and interventions and clinically meaningful comparison groups.25 We would see both pragmatic trials with health status utility measures and economic evaluation, and explanatory trials making planned comparisons of specific therapy “ingredients”.26, 27 The effect of attrition would invariably be analysed using “intention to treat” samples and the variance due to individual therapists would also be routinely reported. There would be greater awareness of the effects of patient preference, and the use of partially randomised preference trials where needed.28 Outcomes should be reported in terms of clinical as well as statistical significance.29 Equipoise of researchers could become a significant factor in decisions to fund a trial.
Some problems are more fundamental than these, though, and often derive from treating psychological therapies as if they were the same as pharmacological treatments. However, the “drug metaphor” breaks down at several points. By its nature, psychological therapy cannot be delivered blind: neither therapist nor patient can be unaware of the intervention being delivered, and there are profound conceptual difficulties with “placebo” treatments in psychological therapy. The intervention can never be entirely specified or standardised, as therapists are responsive to emergent issues, changing what is being delivered throughout the course of treatment.30
Problems with external validity, even in good, clinically realistic trials, are inescapable: there are intrinsic tensions between the internal validity prized by researchers and the external, ecological validity essential to clinicians. Some unavoidable threats to external validity are homogeneity of patient samples, exclusion criteria, standardisation of treatment and randomised allocation. Randomised controlled trials (RCTs) are therefore only one part of a research cycle; they are most appropriately used to test the capabilities of a well developed therapy after single case studies and before large scale field trials.31 Both efficacy and effectiveness research are needed for evidence based practice, and other research strategies complement the controlled trial. Two other sources of evidence are large sample surveys of psychotherapy recipients and research on psychotherapeutic processes.
Practice based research is vital. We lack even elementary information about the effect of treatment setting on the outcome of therapy because such research is difficult to design, and systematic comparisons are rare. Most psychological therapy in the NHS is pragmatic and eclectic, where therapists use a judicious mix of techniques drawn from varying theoretical frameworks. Psychotherapy research, on the other hand, focuses on standardised interventions of “pure” types of therapy (eg, cognitive, behavioural, or psychoanalytic). The most prevalent interventions are paradoxically the least researched. There is sparse evidence about the most effective ways to deliver therapy or which factors influence access to therapy in the real world. The differences between effect sizes obtained in RCTs and those in everyday practice have still not been studied systematically, despite early attempts to investigate this important issue.32
Practice research networks are a promising development: members agree to gather and pool data on clinical outcomes, using the same set of measures, to enable analysis of national or regional datasets. Data are aggregated and anonymised, so individual patients are not identifiable, although patient consent to this use of data is still advisable. These networks help practitioners to take a research based approach to exploring outcomes in their own services and to share the findings, which can be compared with results obtained under more controlled experimental conditions.33 By developing high quality clinical databases on large, clinically representative samples, practice research networks can contribute an important source of evidence on the effectiveness of services as delivered. This approach is already bearing fruit in other fields, such as surgery, obstetrics, haematology and oncology.34
Obtaining good quality, replicated research findings on all therapeutic approaches across all diagnoses, in their many complex combinations, is a huge unfulfilled enterprise. Some see the attempt as misguided and unfeasible.25 An alternative approach overcomes the problem of having to test every “brand name” therapy by paying less attention to self styled differences between therapies and instead studying basic psychological processes common to all successful therapy, such as the therapeutic alliance, emotional assimilation and phases of change. To return momentarily to the drug metaphor, psychotherapy outcome research without process research is like undertaking drug efficacy trials without laboratory research on the drugs' mode of action.
There is perhaps one sense in which psychological treatments are genuinely disadvantaged compared with drug treatments. As with health services research and research on social interventions, the cost of researching psychotherapy is borne almost entirely by public funds. Yet for psychological therapies, “head to head” comparisons with medication are commonly made in primary research and meta-analysis. No patents are granted on psychological therapies and, as they are not commercially exploitable, the research investment in them is trifling compared with research and development budgets in pharmaceuticals.
So is evidence based psychotherapy a special case, or is there too much special pleading? Perhaps predictably, both are true to an extent. Evidence based psychotherapy needs some modification of the rules of evidence based medicine, but these adaptations are few and, in my view, consensus could be reached on what they are. The field certainly needs more research and development investment in under researched therapies, modalities and populations. But claims that psychotherapy, in any variant, is uniquely outside the discourse are insupportable.