In most, if not all, healthcare conditions there is a plethora of competing interventions, often with few head-to-head comparisons, and classical pairwise meta-analysis cannot handle the multiple comparisons among interventions simultaneously. Network meta-analysis is an extension of pairwise meta-analysis that accommodates multiple interventions and comparisons.
What it is
Consider the simple example in which we have some trials comparing A versus B and some trials comparing A versus C, where A is the control treatment and B, C are two active treatments. Of course, it is interesting to explore whether B and C are better than A, but interest also lies in the relative efficacy between B and C, and there are no trials directly comparing these two interventions (providing direct evidence). However, both are compared with A, and one can go from B to C indirectly via A, forming a connected network. Hence, along with the direct effects $\mu_{AB}^{\mathrm{dir}}$ and $\mu_{AC}^{\mathrm{dir}}$, one can estimate an indirect effect $\mu_{BC}^{\mathrm{ind}}$.
Suppose now that A, B and C are not interventions but people and the outcome is height. It is clear that if B is 5 cm taller than A and C is 8 cm taller than A, then C is 3 cm taller than B. This is how indirect comparisons also work with interventions, through the formula

$$\mu_{BC}^{\mathrm{ind}} = \mu_{AC}^{\mathrm{dir}} - \mu_{AB}^{\mathrm{dir}} \quad (1)$$
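The indirect calculation above can be sketched in a few lines of Python, using the height example (B is 5 cm taller than A, C is 8 cm taller than A). The function names are illustrative, not taken from any NMA package; the variance rule assumes the two direct estimates are independent.

```python
def indirect_effect(mu_ac, mu_ab):
    """Indirect C-vs-B effect from two direct effects sharing comparator A."""
    return mu_ac - mu_ab

def indirect_variance(var_ac, var_ab):
    """Variances of independent direct estimates add up, so the
    indirect estimate is less precise than either direct one."""
    return var_ac + var_ab

mu_ab, mu_ac = 5.0, 8.0                 # direct effects versus A (cm)
mu_bc = indirect_effect(mu_ac, mu_ab)   # C is 3 cm taller than B
```

Note that the indirect route pays a price in precision: its variance is the sum of the two direct variances, which is why direct evidence, where available, remains valuable.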
Consider now the case in which there are studies directly comparing all possible pairs of interventions. Hence, we can estimate $\mu_{BC}$ directly but also indirectly through Equation 1. In this case, network meta-analysis (NMA) combines direct and indirect evidence to estimate the relative efficacy for each pair of interventions. NMA is easily employed in both a Bayesian and a frequentist framework in most statistical software, including R, Stata and WinBUGS.
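How direct and indirect evidence are combined can be sketched with inverse-variance weighting, a standard fixed-effect pooling rule (real NMA software fits the whole network jointly; this two-source version and its numbers are purely illustrative):

```python
def pool(est_direct, var_direct, est_indirect, var_indirect):
    """Fixed-effect inverse-variance pooling of a direct and an
    indirect estimate of the same contrast (assumed independent)."""
    w_d, w_i = 1.0 / var_direct, 1.0 / var_indirect
    pooled = (w_d * est_direct + w_i * est_indirect) / (w_d + w_i)
    pooled_var = 1.0 / (w_d + w_i)      # always below both input variances
    return pooled, pooled_var

est, var = pool(0.5, 0.04, 0.3, 0.08)   # invented effects and variances
```

The pooled variance is smaller than that of either source alone, which is the precision gain NMA offers.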
Strengths of NMA
By combining not only direct but also indirect evidence, NMA results (most of the times) in estimates with increased precision.
Provides estimates for the relative efficacy of interventions that have never been compared directly.
Ranks interventions for each outcome considered.
A key feature of NMA is the contribution matrix, which shows how much each study contributed to the NMA results. It shows how information flows in the network and, combined with study characteristics such as risk of bias, it lets us evaluate how much confidence we should place in each NMA estimate.1
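The ranking of interventions mentioned above is often obtained by simulation: repeatedly draw effects from their estimated distributions and count how often each treatment comes out best. A minimal sketch, assuming normally distributed effect estimates versus a common comparator A (the effects and standard errors below are invented):

```python
import random

random.seed(1)
treatments = {"B": (0.5, 0.20), "C": (0.3, 0.25)}  # (effect vs A, SE), hypothetical

n_sim = 10_000
best_count = {t: 0 for t in treatments}
for _ in range(n_sim):
    # One simulated "world": draw each effect from its normal distribution.
    draws = {t: random.gauss(mu, se) for t, (mu, se) in treatments.items()}
    best_count[max(draws, key=draws.get)] += 1

# Probability that each treatment ranks first on this outcome.
p_best = {t: c / n_sim for t, c in best_count.items()}
```

Even a treatment with the higher point estimate is rarely "best" with certainty, which is one reason rankings should be read alongside the relative effects themselves.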
Assumptions of NMA
Like any statistical model, NMA makes assumptions and validity of its results depends on the plausibility of the assumptions made.
The key assumption is that of transitivity, which states that one can validly learn about B versus C indirectly, through the interventions that connect them in the network.
We approximate transitivity statistically by comparing direct and indirect evidence (consistency assumption).
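One common statistical check of consistency in a closed loop is a z-test on the difference between the direct and indirect estimates (in the spirit of the Bucher approach; all numbers below are hypothetical):

```python
import math

def inconsistency_z(est_dir, var_dir, est_ind, var_ind):
    """z-statistic for the direct-minus-indirect difference,
    treating the two sources as independent."""
    diff = est_dir - est_ind
    se = math.sqrt(var_dir + var_ind)
    return diff / se

z = inconsistency_z(0.5, 0.04, 0.3, 0.08)
# |z| > 1.96 would flag statistical inconsistency at the 5% level.
```

As the text notes, such tests are often underpowered, so a non-significant result does not establish that transitivity holds.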
What transitivity entails
The distribution of a priori chosen effect modifiers is similar across treatment comparisons. Suppose that baseline risk is an effect modifier and A and B are compared in low-risk populations but A and C are compared in high-risk populations. In this case, baseline risk confounds the relationship between B and C, and we do not know with certainty whether any difference is due to the different intervention employed or to baseline risk. It is typical to compare publication year across treatment comparisons, as it is a proxy for trial quality, risk of bias, publication bias problems and so on.
Treatments should be similar when they appear in different comparisons. Suppose A is a placebo pharmacological intervention in A versus B trials but a placebo psychological intervention in A versus C trials. The two placebos are potentially different, and the node should be split into two, which may leave the network disconnected.
Participants could, in principle, have been randomised to any of the available interventions. Suppose that A is given as a first-line or a second-line treatment, whereas B is a first-line treatment and C is a second-line treatment. In this case, those randomised in the A versus B trials could not have been randomised in the A versus C trials.
Clinicians interested in conducting an NMA should
Include a statistician knowledgeable of the methodology.
Understand the transitivity assumption in order to appraise it. Statistically, direct and indirect evidence can be in agreement (eg, due to lack of power) while transitivity is still violated, and the statistician will not necessarily spot that.
Not be overly enthusiastic about ranking. Appealing though it may be, it should always be appraised alongside the actual relative effects.
We acknowledge the contribution of Myrsini Giannatsi, Andrea Cipriani and Georgia Salanti in writing the corresponding paper for the ‘statistics in practice’ series.
Contributors DM is the sole contributor and author of this paper.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Patient consent for publication Not required.