Clinical questions arise continuously in daily clinical practice, and it is important that they are answered in an evidence-based manner. Clarifying the key elements of the question is a critical first step, whether the aim is to provide an answer to inform a decision or to frame the research that needs to be done.

The PICO (Population, Intervention, Comparator, and Outcomes) model captures these key elements and is a useful strategy for formulating answerable questions.[1]

  • Population: who are the relevant patients or the target audience for the problem being addressed?

     Example: In women with nontubal infertility…

  • Intervention: what intervention is being considered?

     Example: …would intrauterine insemination…

  • Comparator: what is the main alternative to the intervention that you want to assess?

     Example: …when compared with fallopian tube sperm perfusion…

  • Outcomes: what are the consequences of the intervention for the patient, or what are the main outcomes of interest to the patient or decision maker?

     Example: …lead to higher live birth rates with no increase in multiple pregnancy, miscarriage, or ectopic pregnancy rates?

A clear and focused question is more likely to lead to a credible and useful answer, whereas a poorly formulated question can lead to an uncertain answer and create confusion. The population and intervention should be specific, bearing in mind that if either or both are described too narrowly, it may be difficult to find relevant studies or sufficient data to provide a reliable answer. The population might refer to people with a medical condition, or at risk of illness, and it may be important to specify the stage of disease or the clinical context. Interventions might range from a diagnostic or screening test to a therapeutic intervention of any kind. It might be necessary to specify the intervention and comparator in some detail, including mode of administration, dosage, duration of treatment, or the different elements that make up a complex intervention. The most appropriate comparator might be no treatment or placebo, variations of usual care, or alternative competing interventions. The outcomes should be those judged most important to patients or other decision makers; surrogate outcomes (such as bone density) should usually not be considered unless they can be shown to be directly linked to patient-important outcomes.[2] For complex questions, a logic framework is frequently crucial to clarifying likely pathways of action.[3][4]
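
As an illustration only, the PICO elements of the worked infertility example above can be captured as a simple structured record. The sketch below is hypothetical (the `PICOQuestion` class and `as_question` helper are not part of any established tool); it simply shows how the four elements fit together into one answerable question.

```python
from dataclasses import dataclass, field

@dataclass
class PICOQuestion:
    """Illustrative container for the four PICO elements of a clinical question."""
    population: str    # who are the relevant patients or target audience?
    intervention: str  # what intervention is being considered?
    comparator: str    # the main alternative against which it is assessed
    outcomes: list[str] = field(default_factory=list)  # consequences that matter to patients

    def as_question(self) -> str:
        # Assemble the elements into a single answerable question.
        return (f"In {self.population}, does {self.intervention}, "
                f"compared with {self.comparator}, affect "
                f"{', '.join(self.outcomes)}?")

# The worked example from the text above.
question = PICOQuestion(
    population="women with nontubal infertility",
    intervention="intrauterine insemination",
    comparator="fallopian tube sperm perfusion",
    outcomes=["live birth", "multiple pregnancy", "miscarriage", "ectopic pregnancy"],
)
print(question.as_question())
```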

Clarifying a good question will help to determine a credible answer, but another important issue to be addressed is the certainty of that answer. To that end, the use of the GRADE approach to assess certainty (or the quality of a body of evidence) is a critical element,[5] and it should be considered from the outset in relation to any research question. The first step consists of identifying the PICO elements, in particular the comparisons and outcomes, that are critical for decision making, and differentiating them from those that are important but not critical and from those that are not important. In addition, for many outcomes it will be useful to predetermine what constitutes a minimum important difference or effect, in order to support interpretation of the results of the analysis and to inform decision making.
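
Purely as a sketch of this prioritization step, the outcomes of the example question could be recorded together with their importance category and any predetermined minimum important difference (MID). The categories below follow the text (critical, important but not critical, not important); the specific MID values and the "clinic visits" outcome are invented placeholders, not recommended thresholds.

```python
# Hypothetical prioritization of outcomes for the example question.
# Importance categories follow the text; the MID values (expressed here as
# absolute risk differences) are illustrative placeholders only.
outcome_priorities = {
    "live birth":         {"importance": "critical",      "mid": 0.05},
    "multiple pregnancy": {"importance": "critical",      "mid": 0.02},
    "miscarriage":        {"importance": "important",     "mid": 0.02},
    "ectopic pregnancy":  {"importance": "important",     "mid": 0.01},
    "clinic visits":      {"importance": "not important", "mid": None},
}

# Only critical and important outcomes are carried forward into the analysis.
for outcome, spec in outcome_priorities.items():
    if spec["importance"] != "not important":
        print(f"{outcome}: {spec['importance']} (MID = {spec['mid']})")
```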

A key part of the GRADE approach relates to the nature of the studies contributing data to help answer the question. For therapeutic interventions, the randomized controlled trial (RCT) remains, in most cases, the most reliable means of determining effectiveness. Accordingly, within the GRADE approach, RCTs are initially considered to provide high-quality evidence and observational studies low-quality evidence. This initial rating may subsequently be lowered for limitations of design, inconsistency, indirectness, imprecision, or publication bias, or (for nonrandomized studies) raised for a large effect, a dose-response gradient, or plausible confounding. The final GRADE rating of the evidence for each outcome (from high, where we are very confident that the true effect lies close to the effect estimate, to very low, where we are very uncertain about the estimate) will determine how we use the data obtained in response to our question.
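
The rating logic described in this paragraph can be summarized in a short sketch: start from the study design, lower the rating for each serious concern, and (for nonrandomized studies) raise it for the listed special factors. The function below is a simplified illustration of that logic only, not an implementation of full GRADE guidance (which, for example, allows rating down or up by more than one level per factor).

```python
# GRADE certainty levels, from lowest to highest.
LEVELS = ["very low", "low", "moderate", "high"]

# Factors that lower the rating, and factors that can raise it
# (the latter applied to nonrandomized evidence).
DOWNGRADE_FACTORS = {"limitations of design", "inconsistency",
                     "indirectness", "imprecision", "publication bias"}
UPGRADE_FACTORS = {"large effect", "dose response", "plausible confounding"}

def rate_certainty(randomized: bool,
                   concerns: set[str],
                   strengths: set[str] | None = None) -> str:
    """Simplified sketch of the GRADE starting point plus rating down and up."""
    strengths = strengths or set()
    # RCTs start as high-certainty evidence; observational studies start as low.
    level = LEVELS.index("high") if randomized else LEVELS.index("low")
    # Lower the rating by one level for each serious concern present.
    level -= len(concerns & DOWNGRADE_FACTORS)
    # Nonrandomized evidence may be raised for the special factors.
    if not randomized:
        level += len(strengths & UPGRADE_FACTORS)
    # Clamp to the four-level scale.
    return LEVELS[max(0, min(level, len(LEVELS) - 1))]

# RCT evidence with serious imprecision: rated "moderate".
print(rate_certainty(randomized=True, concerns={"imprecision"}))
# Observational evidence showing a large effect: rated "moderate".
print(rate_certainty(randomized=False, concerns=set(), strengths={"large effect"}))
```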

In a world where health data are increasingly available, designing a well-constructed question is a key element in obtaining credible answers. However, as data increasingly come from different and multiple sources (regulatory agency databases and repositories rather than scientific journals, or wearable devices, smartphone apps, and social-networking sites), building a clear and focused question may be only the first step in a complex sequence of events that leads to the desired answer. New methods of combining and synthesizing information from different sources will have to be developed in the near future; in the meantime, good use of the resources at hand may help to guide clinicians’ daily work.[6]

Author: David Tovey [updated May 2022 by Caroline Blaine]


References

  1. Richardson WS, Wilson MC, Nishikawa J, Hayward RS. The well-built clinical question: a key to evidence-based decisions. ACP J Club 1995;123:A12-3.
  2. Guyatt G, Montori V, Devereaux PJ, Schünemann H, Bhandari M. Patients at the center: in our practice, and in our use of language. ACP J Club 2004;140:A11-2.
  3. Booth A, Noyes J, Flemming K, Moore G, Tuncalp O, Shakibazadeh E. Formulating questions to explore complex interventions within qualitative evidence synthesis. BMJ Glob Health 2019;4:e001107.
  4. Skivington K, Matthews L, Simpson SA, Craig P, Baird J, Blazeby JM, Boyd KA, Craig N, French DP, McIntosh E, Petticrew M, Rycroft-Malone J, White M, Moore L. A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance. BMJ 2021;374:n2061.
  5. Guyatt GH, Oxman AD, Kunz R, Vist GE, Falck-Ytter Y, Schünemann HJ; GRADE Working Group. What is “quality of evidence” and why is it important to clinicians? BMJ 2008;336:995. doi: https://dx.doi.org/10.1136/bmj.39490.551019.BE
  6. Andrews JC, Schünemann HJ, Oxman AD, Pottie K, Meerpohl JJ, Coello PA, Rind D, Montori VM, Brito JP, Norris S, Elbarbary M, Post P, Nasser M, Shukla V, Jaeschke R, Brozek J, Djulbegovic B, Guyatt G. GRADE guidelines: 15. Going from evidence to recommendation-determinants of a recommendation's direction and strength. J Clin Epidemiol 2013;66:726-35. doi: 10.1016/j.jclinepi.2013.02.003.