Scientific and non-scientific evaluations of mental health treatments. The early randomised clinical trials in psychiatry
The evaluation of treatments in medicine according to the rules of the scientific method is a recent development: it is little more than 50 years old, a brief period relative to the history of medicine.
Its origin is generally attributed to Sir Austin Bradford Hill, since it is believed that with him the research methodology known as the randomised clinical trial (RCT), now considered the gold standard for the evaluation of the efficacy of treatments, was officially born. The first well documented RCT of a medical treatment was organised by the Medical Research Council (MRC) and reported in 1948.1 Nevertheless, similar methodologies, called experiments, had been used earlier outside medicine by psychologists and therefore have a much longer tradition than the RCT. Moreover, the agricultural research of Sir Ronald Fisher in 1926 (see below), and what Armitage describes as an RCT with group randomisation dating back to 1662,2 should also be mentioned. Bull provided an extensive account of the historical development of clinical trials.3
Bearing in mind these antecedents, the contribution of Sir Bradford Hill (the prime motivator behind the MRC trials before 1950) to making the methodology for RCTs systematic is still considered to be prominent.
Bradford Hill, professor of biostatistics, was not medically qualified and was not involved in studies in psychiatry until, invited by the MRC to chair and co-ordinate the Clinical Trials Sub-Committee, he was prompted to consider the possibility of using RCT methodology in psychiatric research. Michael Shepherd was the secretary of the MRC Clinical Trials Sub-Committee. It was the mid 1950s, and the most famous English psychiatrists were invited by Bradford Hill and George Pickering, Regius Professor of Medicine at Oxford, to collaborate on the organisation of a clinical trial conducted under the aegis of the MRC. The psychiatrists immediately betrayed their scepticism, declaring that the knowledge doctors acquire at the bedside, and clinical intuition, were superior to any other methodology.
In psychiatry, the first controlled clinical trial was published in 1955 by David Davies and Michael Shepherd in the Lancet.4 It reported a double blind study that compared reserpine with placebo, done at the Maudsley Hospital in London and involving 54 patients with symptoms of anxiety and depression.
The first large scale RCT appeared some years later, in 1965. This study was a multicentre MRC study on the treatment of depression and compared the effects of electroconvulsive therapy, a tricyclic antidepressant, a monoamine oxidase inhibitor antidepressant, and placebo in hospitalised psychiatric patients.5
A method that is today considered most robust for accepting or rejecting hypotheses about whether a treatment could be considered effective, and therefore useful in clinical practice, was strongly criticised at the time by several representatives of academic psychiatry (it is worth noting that this occurred only 35 years ago!). One such representative actually wrote in a letter published in the BMJ that, "There is no psychiatric illness in which bedside knowledge and long clinical experience pays better dividends and we are never going to learn how to treat depression properly from double blind sampling in an MRC statistician's office."6 As reported by Shepherd,7 Bradford Hill replied directly to this view, which is now considered simplistic but, I believe, is regrettably still relatively common among clinicians, as follows: "Unfortunately, as one of the patients in the bed I feel more than a trifle depressed while – partly at my expense – he gains his knowledge and his long clinical experience. I would have hoped that the process of learning might be a little less long if it were supported by the experimental method and attitude of mind.... The statistician's office, needless to say, merely provides an experimental design upon which to hang the skilled clinical observation that must characterise any form of inquiry into therapeutic efficacy. There is no question of replacing valuable clinical observations by a series of mathematical symbols. Those who think so have the myopia of Don Quixote: they mistake the scaffold for the house."8
This was not the first time that a clinician, fired by what some may consider to be excessive therapeutic enthusiasm, substituted personal experience for critical evaluation and proven efficacy. Consider, for example, the case of Manfred Sakel. He described his role as the founder of insulin coma therapy and considered it the first effective drug treatment of schizophrenia, claiming a 70% full remission rate and a high proportion of undefined "social remissions" in the remainder of cases.7 Such "therapy" continued to be considered effective for many years, at least until 1953, when a young psychiatrist, Harold Bourne, published a study in the Lancet entitled "The insulin myth", in which, by critically analysing the literature, he showed how unsatisfactory the evidence really was for the supposed effectiveness of insulin.9
For several weeks afterwards letters of indignation were published in the correspondence columns of the Lancet by prominent psychiatrists of that time. They considered the challenge to established practice an effrontery and accused Bourne of clinical inexperience, of using selective quotations from the literature, and of an iconoclastic temperament, allied with youthful intemperance. Among the signatories of these letters, as reported by Shepherd, were several of the best known figures in British psychiatry.10
Another example of non-scientific methods of treatment in psychiatry is the unethical and tragically noxious programme of psychic driving and depatterning, developed by Professor D Ewen Cameron, Director of the Allan Memorial Institute of Psychiatry in Montreal, with money obtained from the American Central Intelligence Agency.10 This sad story has been documented in some detail because of legal action taken against the CIA by relatives of many unfortunate patients.11 Incidentally, Cameron was one of the strongest supporters of insulin coma therapy. He, according to Shepherd,10 “represented par excellence the spirit of furor therapeuticus, clearly enunciated also by other eminent British psychiatrists, such as William Sargant and Eliot Slater”.
The slow development, within medicine, of the "scientific method" and the evolution of research designs for evaluating medical treatments
In reality, even if the scientific evaluation of treatments and of clinical procedures is recent history, the techniques characteristic of the scientific method and their application in medicine have a much longer history. It is worthwhile recalling several illustrations.
Sir Francis Bacon (1561–1626), according to whom "the source of every error is in the impurity of the mind, since Nature itself could not lie", suggested that the 3 fundamental stages of the scientific method are (1) observation, (2) classification, and (3) deduction. He held that observation (which sometimes follows experimentation) must be supplemented where necessary by experiment and guided by hypothesis, leading to classification and the formulation of general laws, followed eventually by deduction (ie, the prediction that new facts will conform to general laws already identified).
The use of statistics, which is an essential part of the scientific method, is quite old. Again, statistics began outside medicine, in an agricultural station in England and a brewery in Ireland. Sir Ronald Fisher used terms in the analysis of variance that reflect the fact that he was working in agriculture (eg, split plot): literally, he split a plot of land in half to test different fertilisers. William Gosset, moreover, was a statistician at the Guinness Brewery; company policy prevented him from publishing, so he wrote under the pseudonym Student (of t test fame).
In 1721, Cotton Mather in Boston and Jurin in London grasped the importance of statistics while attempting to show the efficacy of vaccination against smallpox. The first use of statistics in therapeutic research, however, is attributed to William Cobbett who, while calculating the mortality rate in Philadelphia in 1800, showed that the treatment of yellow fever by bloodletting and purgatives was not only useless but very often dangerous. Thirty years later Pierre Charles Alexandre Louis originated clinical biometry by publishing his studies in Paris on the effects of bloodletting in pneumonia, erysipelas, and other infections. In these studies, he linked the collection of clinical and laboratory findings with follow up data, thereby underlining the necessity of following treated and untreated patients over time to prove the efficacy of treatments. He defined this procedure as the numerical method and thereby opened the way for the introduction of statistical concepts such as probability and the normal curve (described by Quetelet in 1846), the correlation coefficient (described by Galton in 1869), the chi squared test (described by Pearson in 1900), and the Student t test (described by Gosset in 1908) (see Hordern).12
It is believed that the first allocation of patients to experimental and control groups was made by James Lind in 1747. However, David Streiner (personal communication) notes that the first controlled trial was conducted not by Lind, but by Daniel (Book of Daniel, chapter 1, verses 11–20), who compared the effects of two 10 day diets given to 2 groups of people (Jews and Babylonians). The assignment was not at random, but it was still a well designed study.
James Lind, a naval surgeon, showed by means of an elegant clinical experiment that the consumption of oranges and lemons prevented the development of scurvy, an illness responsible for the death of many English sailors between 1600 and 1800. In reality, before Lind's study (done in 12 patients divided into 6 homogeneous groups, all nursed in the same environment on a diet that was identical except for their medical treatment, given for 6 days: citrus fruits, ie, 2 oranges and 1 lemon per day; cider; elixir of vitriol; vinegar; seawater; or a purgative electuary), the use of citrus fruit juice for treating scurvy had already been suggested in at least 80 other texts. In any case it was only in 1795, the year after Lind's death, that 2 of his students, Blane and Trotter, managed to convince the British Navy to introduce lemon juice as an obligatory part of the English sailor's diet. England and Spain were at war in 1796, so it was not until 1810, when large quantities of citrus fruit from Malta and Sicily became available, that scurvy disappeared amongst sailors. Lind had to wait 6 years before he could publish his studies, and another 42 years passed before the results of his work were translated into widely practised curative and preventative measures. A good example, according to Hordern,12 of the difficulty of translating the results of scientific research into practice!
The use of blind evaluation (an antecedent of the double blind methodology) began in the middle of the 1800s, when it was used by the Medical Society, by the Austrian Homeopathic Doctors' Association in Vienna, and by Loeftler in Berlin. Finally, the use of placebo in therapeutic practice and in research began in 1954–5 with J H Gaddum and H K Beecher, professor of anaesthesia at Harvard. Beecher intuited the utility of pharmacologically inert substances after noting, by chance, the effectiveness of distilled water administered instead of morphine in the treatment of pain and shock when morphine was unavailable. These observations were made during World War II on a Pacific island where, as a medical officer, he was required to treat soldiers who had been injured at the front.
It bears repeating that, even if singular aspects or fragments of the methodology that characterises today's controlled clinical studies are identifiable in some of the experiences and practices of the last 3 or 4 centuries, the standardisation and codification of such a methodology and therefore the introduction in medicine of scientific evaluation of treatment outcomes is much more recent than commonly believed.
In conclusion, we need to learn from experience. For better tackling present problems in the evaluation of mental health treatments, and for implementing the results of these evaluations in everyday clinical practice,13 it is sometimes useful that this learning also takes a historical perspective.
* Based, in part, on the foreword by M Tansella, "Valutare l'esito per migliorare la qualità dell'assistenza e l'utilizzazione delle risorse" [Evaluating outcome to improve the quality of care and the use of resources], to the book: Ruggeri M, Dall'Agnola R. Come valutare l'esito nei dipartimenti di salute mentale. Roma: Il Pensiero Scientifico Editore, 2000.