A key point in critically appraising randomized controlled trials (RCTs) is that, although appraisal can be approached as an academic exercise, its real purpose is to evaluate how much weight can be placed on the findings and how far the results can be generalized from trials into routine practice to inform clinical care. These pragmatic considerations always need to be borne in mind when critically appraising study data.

There are numerous checklists available, none of which is perfect. Individual issues may arise that are relevant to one study only. Alternatively, certain disease states may raise methodological issues that are specific to studies in that field. Hence, no checklist can be entirely comprehensive; each should be regarded only as a broad framework to apply.

The SR Toolbox is an online catalog that provides summaries of, and links to, the available guidance and software for each stage of the systematic review process, including critical appraisal. Examples of appraisal tools for randomized controlled trials include:

  • Revised Cochrane risk-of-bias tool for randomized trials (RoB 2)
  • Jadad scale
  • CASP Randomised Controlled Trial Checklist

At the most general level, three possible scenarios arise when critically appraising the quality of an RCT:

  • Methodology sound: include
  • Methodology suboptimal: include, with appropriate caveats about how the limitations affect interpretation of the results
  • Methodology unsound/fatal flaw: exclude.

The first assessment is whether a study meets explicit minimum quality criteria (that is, minimum acceptable sample size, level of blinding [where blinding is possible], length of follow-up, etc).
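For readers who record appraisal judgments in a structured or programmatic form, the triage above can be expressed as a simple decision rule. The sketch below is purely illustrative: the field names, thresholds, and `triage` function are assumptions chosen for demonstration and are not taken from RoB 2, the Jadad scale, or the CASP checklist; in practice these judgments remain qualitative.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names and thresholds are assumptions
# for demonstration, not values from any published appraisal tool.

@dataclass
class TrialAppraisal:
    sample_size: int
    blinding_possible: bool
    blinded: bool
    followup_months: float
    fatal_flaw: bool  # e.g. randomization clearly broken

MIN_SAMPLE_SIZE = 100      # hypothetical minimum acceptable size
MIN_FOLLOWUP_MONTHS = 6    # hypothetical minimum length of follow-up

def triage(trial: TrialAppraisal) -> str:
    """Map an appraisal onto the three scenarios described above."""
    if trial.fatal_flaw:
        return "exclude"  # methodology unsound
    meets_minimum = (
        trial.sample_size >= MIN_SAMPLE_SIZE
        and (trial.blinded or not trial.blinding_possible)
        and trial.followup_months >= MIN_FOLLOWUP_MONTHS
    )
    return "include" if meets_minimum else "include with caveats"
```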

The following framework provides an initial memory aid for assessing RCTs, but other issues will arise on a study-by-study basis.

