PT - JOURNAL ARTICLE
AU - Mavridis, Dimitris
AU - Chaimani, Anna
AU - Efthimiou, Orestis
AU - Leucht, Stefan
AU - Salanti, Georgia
TI - Addressing missing outcome data in meta-analysis
AID - 10.1136/eb-2014-101900
DP - 2014 Aug 01
TA - Evidence Based Mental Health
PG - 85-89
VI - 17
IP - 3
4099 - http://ebmh.bmj.com/content/17/3/85.short
4100 - http://ebmh.bmj.com/content/17/3/85.full
SO - Evid Based Ment Health 2014 Aug 01; 17(3):85-89
AB - Objective Missing outcome data are a common problem in clinical trials and systematic reviews, as they compromise inference by reducing precision and potentially biasing the results. Systematic reviewers often assume that the missing outcome problem has been resolved at the trial level. However, many clinical trials employ a complete case analysis or suboptimal imputation techniques, and the problem accumulates in the quantitative synthesis of trials via meta-analysis. The risk of bias due to missing data depends on the missingness mechanism. Most statistical analyses assume data are missing at random, which is an unverifiable assumption. The aim of this paper is to present methods used to account for missing outcome data in a systematic review and meta-analysis. Methods The following methods for handling missing outcome data are presented: (1) complete case analysis, (2) imputation methods that use the observed data, (3) best-case/worst-case scenarios, (4) an uncertainty interval for the summary estimate and (5) a statistical model that makes assumptions about how treatment effects in the missing data are related to those in the observed data. Examples are used to illustrate all the methods presented. Results Different methods yield different results. A complete case analysis leads to imprecise and potentially biased results. The best-case/worst-case scenarios give unrealistic estimates, while the uncertainty interval produces very conservative results. Imputation methods that replace missing data with values from the observed data do not properly account for the uncertainty introduced by the unobserved data and tend to underestimate SEs. Employing a statistical model that links treatment effects in the missing and observed data, unlike the other methods, reduces the weight assigned to studies with large missing rates. Conclusions Unlike clinical trials, in systematic reviews and meta-analyses we cannot adopt pre-emptive methods to account for missing outcome data.
There are statistical techniques, implemented in commercial software (eg, Stata), that quantify the departure from the missing at random assumption and adjust the results appropriately. A sensitivity analysis with increasingly stringent assumptions about how parameters in the unobserved and observed data are related is a sensible way to evaluate the robustness of the results.
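The imputation scenarios described in the abstract can be illustrated with a minimal sketch for dichotomous outcomes. The trial data and function names below are hypothetical, not from the paper; fixed-effect inverse-variance pooling and the 0.5 continuity correction are assumptions made for brevity. The `imor` function implements an IMOR-style assumption (odds of the event among the missing equal k times the odds among the observed), but unlike the full statistical model the paper discusses, this naive version only shifts point estimates and does not propagate the extra uncertainty or downweight studies with high missingness.

```python
import math

# Illustrative trial data (hypothetical, not from the paper): for each
# two-arm trial, events, completers and missing participants per arm.
trials = [
    # (e_t, n_t, m_t, e_c, n_c, m_c)
    (12, 50, 10, 20, 48, 12),
    (30, 100, 5, 45, 102, 8),
]

def log_or(e1, n1, e2, n2):
    """Log odds ratio (treatment vs control) with a 0.5 continuity
    correction, and its approximate variance."""
    a, b = e1 + 0.5, n1 - e1 + 0.5
    c, d = e2 + 0.5, n2 - e2 + 0.5
    return math.log(a * d / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

# Scenario functions map (events, completers, missing) to imputed counts.
best = lambda e, n, m: (e, n + m)        # no events among the missing
worst = lambda e, n, m: (e + m, n + m)   # all missing had the event

def imor(k):
    """Impute assuming the odds of the event among the missing equal
    k times the odds among the observed (an IMOR-style assumption;
    k = 1 reproduces the observed event rate)."""
    def adjust(e, n, m):
        odds = k * (e / n) / (1 - e / n)
        return e + m * odds / (1 + odds), n + m
    return adjust

def pooled(adjust_t, adjust_c):
    """Fixed-effect (inverse-variance) pooled log OR under a scenario."""
    w_sum = wl_sum = 0.0
    for e_t, n_t, m_t, e_c, n_c, m_c in trials:
        lor, var = log_or(*adjust_t(e_t, n_t, m_t),
                          *adjust_c(e_c, n_c, m_c))
        w_sum += 1 / var
        wl_sum += lor / var
    return wl_sum / w_sum

# Best case (favours treatment) vs worst case (favours control):
print(round(pooled(best, worst), 3), round(pooled(worst, best), 3))

# Sensitivity analysis: vary the informative-missingness parameter.
for k in (0.5, 1.0, 2.0):
    print(k, round(pooled(imor(k), imor(k)), 3))
```

The spread between the best-case and worst-case pooled estimates widens with the proportion of missing data, which is why the abstract calls these bounds unrealistic in practice; scanning a grid of k values, as in the final loop, is closer to the sensitivity analysis the paper recommends.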