Clinical questions arise continuously in daily clinical practice; some can be answered easily by checking a textbook or a national formulary, but others are more complex and require the clinician to consult the research evidence.

Over the past 30 years, the research literature has grown at such a rate that it is not feasible for even the most specialised clinician to keep abreast of all relevant research. Systematic reviews, which aim to synthesise all the high-quality evidence relating to a given question, are therefore frequently the best and most appropriate form of evidence for addressing clinical questions.

Clarifying the key elements of the question is a critical first step, both for providing an answer to inform a decision and for a researcher framing the research to be done. The PICO (Population, Intervention, Comparator and Outcomes) model[1] captures these key elements and is a good strategy for constructing answerable questions.

  • Population: who are the relevant patients or the target audience for the problem being addressed?

    Example: In women with non-tubal infertility…

  • Intervention: what intervention is being considered?

    Example: …would intrauterine insemination…

  • Comparator: what is the main comparator to the intervention that you want to assess?

    Example: …when compared with fallopian tube sperm perfusion…

  • Outcomes: what are the consequences of the interventions for the patient? Or what are the main outcomes of interest to the patient or decision maker?

    Example: …lead to higher live birth rates with no increase in multiple pregnancy, miscarriage or ectopic pregnancy rates?

A clear and focused question is more likely to lead to a credible and useful answer, whereas a poorly formulated question can lead to an uncertain answer and create confusion. The population and intervention should be specific, bearing in mind that if either or both are described too narrowly it may be difficult to find relevant studies or sufficient data to provide a reliable answer. The population might refer to people with a medical condition, or at risk of illness, and it may be important to specify the stage of disease or the clinical context. Interventions might range from a diagnostic or screening test to a therapeutic intervention of any kind. It may be necessary to clarify the intervention and comparator in some detail, including mode of administration, dosage, duration of treatment, or the different elements that make up a complex intervention. The most appropriate comparator might be no treatment or placebo, variations of usual care, or alternative competing interventions. The outcomes should be those judged most important to patients or other decision makers,[2] and surrogate outcomes (such as bone density) should usually not be considered unless they can be shown to be directly linked to patient-important outcomes. For complex questions, a logic framework is frequently crucial to clarifying the likely pathways of action.
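For readers who work with structured data, the four PICO elements can be represented as a simple record from which the full question is assembled. The sketch below is purely illustrative; the `PICOQuestion` class and its `as_question` method are hypothetical names, not part of any established tool.

```python
# Hypothetical sketch: a PICO question as a structured record.
from dataclasses import dataclass, field


@dataclass
class PICOQuestion:
    population: str          # who the question applies to
    intervention: str        # what is being considered
    comparator: str          # the main alternative being compared
    outcomes: list = field(default_factory=list)  # patient-important outcomes

    def as_question(self) -> str:
        """Assemble the four elements into a single answerable question."""
        return (f"In {self.population}, does {self.intervention}, "
                f"compared with {self.comparator}, "
                f"affect {', '.join(self.outcomes)}?")


# The worked example from the text above:
q = PICOQuestion(
    population="women with non-tubal infertility",
    intervention="intrauterine insemination",
    comparator="fallopian tube sperm perfusion",
    outcomes=["live birth", "multiple pregnancy", "miscarriage",
              "ectopic pregnancy"],
)
print(q.as_question())
```

Forcing each element into its own field makes gaps obvious: a question that cannot fill all four slots is usually not yet focused enough to search for.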

Clarifying a good question will help to determine a credible answer, but another important issue is the certainty of that answer. To this end, the GRADE[3] approach to assessing certainty (the quality of a body of evidence) is a critical element, and should be considered from the outset in relation to any research question. The first step is to identify which PICO elements, and in particular which comparisons and outcomes, are critical for decision making, differentiating these from those that are important but not critical and from those that are not important. In addition, for many outcomes it is useful to pre-determine what constitutes a minimum important difference or effect, in order to support interpretation of the results of the analysis and to inform decision making.

A key part of the GRADE approach relates to the nature of the studies contributing data to answer the question. For therapeutic interventions, the randomised controlled trial (RCT) remains, in most cases, the most reliable means of determining effectiveness. Accordingly, within the GRADE approach, RCTs are initially considered to provide high-quality evidence and observational studies low-quality evidence. This initial rating may subsequently be lowered for limitations of design, inconsistency, indirectness, imprecision, or publication bias, or (for non-randomised studies) raised for a large effect, a dose–response relationship, or plausible confounding. The final GRADE rating of the evidence for each outcome (from high, where we are very confident that the true effect lies close to the effect estimate, to very low, where we are very uncertain about the estimate) determines how we use the data obtained in response to our question.

Many of the questions that clinicians encounter in clinical practice are addressed in Cochrane Clinical Answers. These have been created to inform decision making at the point of care, mimicking the questions that clinicians may face and drawing answers from Cochrane Reviews by filtering the data and bringing the most clinically relevant aspects of the review to the forefront. Cochrane Clinical Answers also provide the specific PICO and the outcome data for each of the comparisons reported, together with the quality of the evidence (where GRADE has been used) or a summary of the risk of bias assessment for each outcome. Hence, while they make the information a clinician will be most interested in more accessible, they also increase the use of Cochrane Reviews to inform healthcare decisions.

In a world where health data are increasingly available, designing a well-constructed question is a key element in obtaining credible answers. However, as data increasingly come from different and multiple sources (regulatory agency databases and repositories rather than scientific journals, or wearable devices, smartphone apps and social-networking sites), building a clear and focused question may be only the first step in a complex sequence of events leading to the desired answer. New methods of combining and synthesising information from different sources will have to be developed in the near future,[4] but meanwhile, good use of the resources at hand may help to guide clinicians’ daily work.

Author: David Tovey


References

  1. Richardson WS, Wilson MC, Nishikawa J, Hayward RS. The well-built clinical question: a key to evidence-based decisions. ACP J Club 1995; 123: A12–3.
  2. Guyatt G, Montori V, Devereaux PJ, Schünemann H, Bhandari M. Patients at the center: in our practice, and in our use of language. ACP J Club 2004; 140: A11–2.
  3. Guyatt GH, Oxman AD, Kunz R, Vist GE, Falck-Ytter Y, Schünemann HJ; GRADE Working Group. What is “quality of evidence” and why is it important to clinicians? BMJ 2008; 336:995. doi: https://dx.doi.org/10.1136/bmj.39490.551019.BE
  4. Andrews JC, Schünemann HJ, Oxman AD, Pottie K, Meerpohl JJ, Coello PA, Rind D, Montori VM, Brito JP, Norris S, Elbarbary M, Post P, Nasser M, Shukla V, Jaeschke R, Brozek J, Djulbegovic B, Guyatt G. GRADE guidelines: 15. Going from evidence to recommendation-determinants of a recommendation’s direction and strength. J Clin Epidemiol. 2013 Jul;66(7):726-35. doi: 10.1016/j.jclinepi.2013.02.003.