A randomised controlled trial (RCT) is a type of clinical trial that has long been the gold standard for establishing the efficacy and safety of health interventions. The prospective design of RCTs, combined with the random allocation of participants to a real-world intervention, minimises bias and supports claims of causality. However, a trend is emerging, particularly within the growing fields of digital health and artificial intelligence (AI): the application of the RCT or clinical trial label to studies that fundamentally do not meet its defining criteria. The Lancet Digital Health, like many journals, has observed an influx of manuscripts that describe retrospective data analyses or simulation-based studies yet use the RCT classification. Authors increasingly cite papers from high-impact journals, including those describing AI for cardiac ultrasound and the effect of large language models (LLMs) on physicians' performance in diagnostic and patient care tasks, to justify this semantic drift. Although these cited studies are methodologically sound, comparing simulations or algorithms under controlled, randomised conditions and calling the result a clinical trial blurs a crucial line drawn by established clinical trial guidelines.
The International Committee of Medical Journal Editors (ICMJE) provides a clear and essential definition: a clinical trial is a research project that prospectively assigns people, or a group of people, to an intervention to study the relationship between a health-related intervention and a health outcome. This emphasis on the prospective assignment of people to a health-related intervention is non-negotiable for a true clinical trial. A retrospective analysis of existing data, or a study that randomises participants to different AI simulations without prospectively assigning a real-world, patient-facing therapeutic intervention, is fundamentally different. Such a study can be important and innovative, but, by definition, it is not a clinical trial.