
Addressing missing outcome data in meta-analysis
Dimitris Mavridis (1,2), Anna Chaimani (1), Orestis Efthimiou (1), Stefan Leucht (3,4), Georgia Salanti (1)

  1. Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece
  2. Department of Primary Education, University of Ioannina, Ioannina, Greece
  3. Department of Psychiatry and Psychotherapy, Technische Universität München, München, Germany
  4. Department of Psychiatry, University of Oxford, Oxford, UK

Correspondence to Dimitris Mavridis, dimi.mavridis{at}googlemail.com

Abstract

Objective Missing outcome data are a common problem in clinical trials and systematic reviews, as they compromise inferences by reducing precision and potentially biasing the results. Systematic reviewers often assume that the missing outcome problem has been resolved at the trial level. However, many clinical trials employ a complete case analysis or suboptimal imputation techniques, and the problem accumulates when trials are combined in a quantitative synthesis via meta-analysis. The risk of bias due to missing data depends on the missingness mechanism. Most statistical analyses assume data are missing at random, an assumption that cannot be verified from the observed data. The aim of this paper is to present methods used to account for missing outcome data in systematic reviews and meta-analyses.

Methods The following methods for handling missing outcome data are presented: (1) complete case analysis, (2) imputation of missing values from the observed data, (3) best-case/worst-case scenarios, (4) an uncertainty interval for the summary estimate and (5) a statistical model that makes assumptions about how treatment effects in the missing data are related to those in the observed data. Examples are used to illustrate all the methods presented.
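As a minimal illustration of methods (1) and (3), the sketch below bounds the odds ratio of a single hypothetical two-arm trial with a binary outcome. All counts are invented for illustration; the best (worst) case imputes every missing participant in the treatment arm as an event (non-event), with the opposite assumption in the control arm.

```python
def odds_ratio(events_t, total_t, events_c, total_c):
    """Odds ratio of treatment vs control for a binary outcome."""
    return (events_t / (total_t - events_t)) / (events_c / (total_c - events_c))

# Hypothetical trial: events / completers, plus counts of missing participants
events_t, completers_t, missing_t = 30, 80, 20
events_c, completers_c, missing_c = 20, 80, 20

# (1) Complete case analysis: missing participants are simply ignored
or_cc = odds_ratio(events_t, completers_t, events_c, completers_c)

# (3) Best case for treatment: missing in the treatment arm are all events,
# missing in the control arm are all non-events
or_best = odds_ratio(events_t + missing_t, completers_t + missing_t,
                     events_c, completers_c + missing_c)

# Worst case for treatment: the reverse assumption
or_worst = odds_ratio(events_t, completers_t + missing_t,
                      events_c + missing_c, completers_c + missing_c)

print(f"complete case OR = {or_cc:.2f}")
print(f"best/worst case bounds for OR = [{or_worst:.2f}, {or_best:.2f}]")
```

With a fifth of participants missing in each arm, the resulting interval (here roughly 0.64 to 4.00 around a complete case estimate of 1.80) spans effects in both directions, which is why the abstract describes these scenarios as unrealistic bounds rather than plausible estimates.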

Results Different methods yield different results. A complete case analysis leads to imprecise and potentially biased results. The best-case/worst-case scenarios give unrealistic estimates, while the uncertainty interval produces very conservative results. Imputation methods that replace missing data with values from the observed data do not properly account for the uncertainty introduced by the unobserved data and tend to underestimate SEs. Unlike the other methods, a statistical model that links treatment effects in the missing and the observed data reduces the weight assigned to studies with high rates of missing data.

Conclusions Unlike clinical trials, in systematic reviews and meta-analyses we cannot adopt pre-emptive methods to account for missing outcome data. There are statistical techniques, implemented in commercial software (eg, STATA), that quantify the departure from the missing at random assumption and adjust results accordingly. A sensitivity analysis with increasingly stringent assumptions about how parameters in the unobserved and observed data are related is a sensible way to evaluate the robustness of results.
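One common way to parameterise the departure from missing at random is through an informative missingness odds ratio (IMOR): the odds of the event among missing participants divided by the odds among observed participants, with IMOR = 1 corresponding to missing at random. The sketch below, using hypothetical counts, shows the sensitivity analysis idea from the Conclusions: recompute an arm's risk under increasingly stringent IMOR values.

```python
def imor_adjusted_risk(events, completers, missing, imor):
    """Event risk in one trial arm after imputing missing participants
    with the event probability implied by the chosen IMOR."""
    p_obs = events / completers                 # risk among observed
    odds_mis = imor * p_obs / (1 - p_obs)       # odds among missing under IMOR
    p_mis = odds_mis / (1 + odds_mis)           # implied risk among missing
    return (events + missing * p_mis) / (completers + missing)

# One arm of a hypothetical trial: 30/80 events observed, 20 missing
for imor in (0.5, 1.0, 2.0):  # increasingly stringent departures from MAR
    risk = imor_adjusted_risk(30, 80, 20, imor)
    print(f"IMOR = {imor:.1f} -> adjusted risk = {risk:.3f}")
```

This sketch fixes the IMOR at a point value; a fuller analysis of the kind the abstract describes would also treat the IMOR as uncertain, which propagates extra variance into studies with many missing participants and thereby reduces their weight in the meta-analysis.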
