BMJ Best Practice and the future of clinical decision support: report of a roundtable at BMJ Future Health

This brief paper reports on a recent roundtable discussion held at BMJ Future Health in London. The discussion explored how traditional clinical decision support, represented by BMJ Best Practice, compares with and complements the rapid rise of generative AI in healthcare. Chaired by Kieran Walsh, Clinical Director at BMJ, the discussion brought together three clinicians with frontline experience across hospital medicine, digital health, and decision support.

The conversation began with reflections on BMJ Best Practice as a trusted, established clinical decision support tool.

Panellists described long-standing use, starting in medical school and continuing into daily clinical work. Its strengths were consistency, depth, and reliability. Doctors highlighted its role in refreshing knowledge during busy shifts, supporting safe investigation and treatment decisions, and enabling learning outside the workplace. Importantly, the resource was seen as actionable. It does not just describe conditions, but sets out clear management pathways, guidance on procedures, and information for patients. This makes it useful not only for personal practice but also for supervising and teaching colleagues.

BMJ Best Practice also helps with the challenge of managing patients with multiple conditions.

This was a recurring theme in the discussion. The Comorbidities Manager was highlighted as particularly valuable in modern clinical practice, where patients rarely present to hospital with a single diagnosis. By allowing clinicians to combine conditions such as chronic kidney disease, diabetes, and chronic obstructive pulmonary disease, the tool helps adjust management plans and avoid common errors. Beyond decision support, the resource also supports learning by showing how treatments for one condition can destabilise another, reinforcing holistic care.

Time pressure featured prominently.

Clinicians described needing information within seconds in emergencies and within minutes during ward rounds. BMJ Best Practice was valued for fast access, offline functionality via the app, and predictable performance regardless of the electronic health record system in use. Integration with electronic health records was discussed as desirable, particularly to reduce clicks and cognitive load, but the panel recognised that different systems and partial interoperability still limit seamless use. Until that improves, a stable, independent tool remains important.

The discussion then shifted to generative AI.

All panellists used generative AI tools extensively in everyday life, particularly for administrative tasks, planning, and organising thoughts. In clinical settings, use was more cautious. AI was described as a “thinking partner” rather than a source of answers. Clinicians emphasised that outputs must always be checked against trusted guidelines. Examples of useful clinical applications included exploring rare drug interactions, brainstorming complex clinical case scenarios, and supporting medical scribes. But the panel recognised that usage was sometimes informal, owing to uncertainty and a lack of institutional endorsement.

Trust emerged as the central issue.

Unlike traditional decision support, generative AI often lacks transparency. Clinicians cannot easily trace recommendations back to primary evidence, and references may be unreliable. This undermines confidence, particularly when compared with resources that clearly link guidance to published research. Trust also extended to the clinician-patient relationship. Panellists noted that patients may be uneasy about AI involvement, worrying about bias or depersonalisation. Clear communication was seen as essential. AI must be framed as a tool that frees time for care, not something that replaces clinical judgement or human connection.

Security and governance were major concerns.

The panel agreed that patient-identifiable data should not be entered into general-purpose AI tools. There was unease about clinicians assuming paid versions of AI are secure without fully understanding data handling or regulatory safeguards. Until clear legal frameworks, standards, and accountability are in place, widespread and deep clinical use remains risky. 

Bias was discussed from multiple angles.

AI can reproduce biases present in training data, but human clinicians also carry bias. Examples from AI in dermatology showed that removing contextual patient data sometimes improved accuracy, highlighting that bias can enter systems in unexpected ways. The group agreed that bias cannot be eliminated but must be recognised, monitored, and mitigated.

The final theme was sustainability.

Training and running AI systems consume significant energy. While healthcare uses may justify this cost, there was concern about indiscriminate use and lack of awareness. The panel suggested that constraints and thoughtful use may encourage better questions and reduce waste.

In closing, the discussion positioned BMJ Best Practice as a trusted foundation for clinical care, education, and quality improvement, while generative AI was seen as a resource still in its early days. The future may lie not in choosing one over the other, but in clear governance and education that prepares clinicians to use both safely and wisely.

The authors

Dr Amir Fard, Locum Doctor, Barts Health NHS Trust

Dr Chloe Jacklin, Medical Registrar, North Middlesex University Hospitals NHS Trust

Dr Alexis Nelson, Senior House Officer, The Hillingdon Hospitals NHS Foundation Trust

Dr Kieran Walsh*, Clinical Director at BMJ Group

*Competing interests: KW works for BMJ Group

About BMJ Best Practice

BMJ Best Practice is freely available in the NHS. 

If you would like to make it available in your institution, please contact us at sales@bmj.com.