Evidence synthesis
Having appraised the search results and decided which studies to include, the next step is to report the study results and, where possible, combine them to draw conclusions about the clinical question of interest.
When reporting data from any study, only report on parameters (e.g., population, interventions, comparisons, and outcomes) prespecified in the protocol/review plan.
Note: You should avoid reporting on results presented only in abstracts. Abstracts do not allow proper scrutiny of a trial's methodology, are often sparsely reported, and many never go on to full publication.
It is important to remember that an absence of evidence is not the same as evidence of absence of effect.[1]
Most people conducting systematic reviews use software for evidence synthesis; some tools can combine this with the earlier screening and data extraction stages of the systematic review process. An extensive list of the available guidance and software can be found at the SR toolbox, an online catalog searchable by task.
Meta-analysis
A meta-analysis provides a weighted average of the results from each of the included studies. It yields an overall statistic (together with its confidence interval) that summarizes the effect of the experimental intervention compared with a control intervention for a specific clinical outcome. Before performing a meta-analysis, assess whether the included studies are sufficiently homogeneous (similar both clinically and in study design and methodology) to be combined.
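To make the weighting concrete, here is a minimal sketch of generic inverse-variance fixed-effect pooling in Python, using invented log odds ratios and standard errors purely for illustration; in practice, dedicated review software performs these calculations.

```python
import math

# Hypothetical study results (invented for illustration):
# (label, log odds ratio, standard error of the log odds ratio)
studies = [
    ("Study A", -0.35, 0.20),
    ("Study B", -0.10, 0.15),
    ("Study C", -0.25, 0.30),
]

# Generic inverse-variance weighting: each study is weighted by the
# reciprocal of its variance, so more precise studies count for more.
weights = [1 / se**2 for _, _, se in studies]
pooled = sum(w * est for (_, est, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

# 95% confidence interval on the log scale, then back-transformed to an OR.
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"Pooled OR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```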
To judge how generalizable the results of any individual studies are (and whether they can be combined), a PICOT system is useful:[2]
- Population included
- Intervention assessed
- Comparison tested
- Outcome involved
- Timeframe measured.
When reporting the results, a variety of test statistics are used (P value, relative risk [RR], odds ratio [OR], hazard ratio [HR], weighted mean difference [WMD], standardized mean difference [SMD], etc) depending on the data and the analysis performed. Each test statistic has its own strengths and weaknesses. In light of this, for analyses suggesting statistical benefit with a particular treatment, it is worth also considering absolute measures (such as the absolute risk reduction) where appropriate. Of course, any consideration of absolute data is limited by the information supplied in the original study, and where absolute data are not reported, this is a potentially important omission that warrants comment.
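To illustrate how the relative and absolute measures relate, the sketch below derives RR, OR, absolute risk reduction, and number needed to treat from a single hypothetical 2×2 table (all counts invented):

```python
# Hypothetical 2x2 table for one trial (invented counts):
events_t, n_t = 30, 200   # events / total in the treatment arm
events_c, n_c = 45, 200   # events / total in the control arm

risk_t = events_t / n_t   # 0.150
risk_c = events_c / n_c   # 0.225

rr = risk_t / risk_c                         # relative risk
odds_ratio = (events_t / (n_t - events_t)) / (events_c / (n_c - events_c))
arr = risk_c - risk_t                        # absolute risk reduction
nnt = 1 / arr                                # number needed to treat

print(f"RR {rr:.2f}, OR {odds_ratio:.2f}, ARR {arr:.1%}, NNT {nnt:.0f}")
```

Here a relative risk of 0.67 sounds impressive, but the absolute risk reduction of 7.5% (NNT of about 13) conveys how much difference the treatment makes in practice.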
Another area for consideration is how to assess the variation across studies (heterogeneity). This is frequently done using the I² statistic, which describes the percentage of the total variation across studies that is due to heterogeneity rather than chance. I² values range from 0% to 100%, with 0% indicating statistical homogeneity. It has been suggested that the adjectives low, moderate, and high heterogeneity be assigned to I² values of 25%, 50%, and 75%, and significant heterogeneity is typically considered present when I² is 50% or more. For a more complete picture, the magnitude of I² needs to be interpreted alongside the P value for the chi-squared test or a confidence interval for I².
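For readers who want to see what lies behind the statistic, this is a minimal sketch of the I² calculation from Cochran's Q, where I² = max(0, (Q − df)/Q) × 100% (study estimates invented):

```python
# Invented study effect estimates (log odds ratios) and standard errors.
estimates = [-0.35, -0.10, -0.25, 0.40]
ses = [0.20, 0.15, 0.30, 0.18]

# Fixed-effect inverse-variance weights and pooled estimate.
weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
df = len(estimates) - 1

i_squared = max(0.0, (q - df) / q) * 100
print(f"Q = {q:.2f} on {df} df, I² = {i_squared:.0f}%")
```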
The methods for meta-analysis depend on the outcomes. For dichotomous outcomes this can be a fixed-effect (Mantel-Haenszel, Peto odds ratio, or inverse variance) or random-effects (DerSimonian and Laird inverse variance) model. The Peto odds ratio method tends to be used when event rates are very low, and random-effects models are used when significant heterogeneity is present. For continuous variables, consider whether the mean difference or the standardized mean difference will be used, and whether the analysis will include change scores (change-from-baseline measurements) or just postintervention values. Consider consulting a statistician when making these decisions.
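As an illustration of the random-effects approach, the sketch below implements the DerSimonian and Laird method-of-moments estimate of the between-study variance (tau²) and the resulting pooled estimate, again with invented values:

```python
import math

# Invented study effect estimates (log odds ratios) and standard errors.
estimates = [-0.35, -0.10, -0.25, 0.40]
ses = [0.20, 0.15, 0.30, 0.18]

# Fixed-effect quantities needed for the DerSimonian-Laird estimator.
w = [1 / se**2 for se in ses]
fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))
df = len(estimates) - 1

# Method-of-moments estimate of the between-study variance tau².
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights add tau² to each study's variance, which
# pulls the weights closer together than under the fixed-effect model.
w_re = [1 / (se**2 + tau2) for se in ses]
pooled_re = sum(wi * e for wi, e in zip(w_re, estimates)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
print(f"tau² = {tau2:.3f}, pooled log OR = {pooled_re:.2f} (SE {se_re:.2f})")
```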
The results of a meta-analysis are often presented graphically in a forest plot. This allows the reader to visually compare all the studies included in the analysis in one place. The forest plot also visually represents significance or nonsignificance, the precision of the results through the width of the confidence interval, and gives an indication for any heterogeneity (possible differences) across the studies that may need to be explained.
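A basic forest plot can be sketched with matplotlib, assuming hypothetical odds ratios and confidence intervals (dedicated meta-analysis packages produce more polished versions):

```python
import matplotlib.pyplot as plt

# Hypothetical odds ratios with 95% CI bounds (invented for illustration),
# one row per study plus the pooled result.
labels = ["Study A", "Study B", "Study C", "Pooled"]
est = [0.70, 0.90, 0.78, 0.80]
lo = [0.47, 0.67, 0.43, 0.66]
hi = [1.04, 1.21, 1.40, 0.97]

y = range(len(labels))
xerr = [[e - l for e, l in zip(est, lo)],   # distance to lower bound
        [h - e for h, e in zip(hi, est)]]   # distance to upper bound
plt.errorbar(est, y, xerr=xerr, fmt="s", color="black", capsize=3)
plt.axvline(1.0, linestyle="--", color="grey")  # line of no effect for ratios
plt.yticks(y, labels)
plt.gca().invert_yaxis()   # first study at the top, as is conventional
plt.xscale("log")          # ratio measures are usually plotted on a log scale
plt.xlabel("Odds ratio (95% CI)")
plt.tight_layout()
plt.show()
```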
The Cochrane Handbook states that "potential advantages of meta-analyses include an improvement in precision, the ability to answer questions not posed by individual studies, and the opportunity to settle controversies arising from conflicting claims. However, they also have the potential to mislead seriously, particularly if specific study designs, within-study biases, variation across studies, and reporting biases are not carefully considered".[3]
When performing a meta-analysis it is not possible simply to add new studies to an earlier meta-analysis unless all of the original raw data are available. This is why a systematic review is usually updated by the same group responsible for the previous version.
Reporting the outcomes that matter
The most useful studies report on clinical outcomes: that is, ones that matter to people, such as mortality, morbidity, number of people improved, etc. Laboratory or proxy outcomes rarely count as outcomes that matter to people.
For example, a review on fracture prevention should usefully report on fractures prevented, not on changes in bone density measured by scans, which may or may not eventually result in fractures.
Reporting on laboratory outcomes may sometimes be appropriate, particularly when reporting on clinical outcomes is scarce and where the laboratory outcomes are commonly used in management or are considered strongly related to prognosis.
Reporting adverse effects
The first step in reporting harms is obviously to report any adverse effects found in the included trials, but RCTs are often underpowered to detect harms. Depending on the type of study you are producing, and its inclusion criteria, it may be appropriate to include non-RCT data that provide details on relevant adverse effects. Relevant warnings from bodies such as the FDA and MHRA may also be appropriate for inclusion.
Adverse effects are often underreported, and you may consider it appropriate to consult other sources of evidence, such as observational data, case reports, warnings, and prescription guides, to get a comprehensive view of the harms associated with an intervention.
References
1. Altman DG, Bland JM. Absence of evidence is not evidence of absence. BMJ. 1995;311:485.
2. Brown P, Brunnhuber K, Chalkidou K, et al. How to formulate research recommendations. BMJ. 2006;333:804–6. Available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1602035/
3. Deeks JJ, Higgins JPT, Altman DG (editors). Chapter 10: Analysing data and undertaking meta-analyses. In: Higgins JPT, Thomas J, Chandler J, et al (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022). Cochrane, 2022. Available at www.training.cochrane.org/handbook