Do overly complex reporting guidelines remove the focus from good clinical trials?
BMJ 2021; 374 doi: https://doi.org/10.1136/bmj.n1793 (Published 16 August 2021) Cite this as: BMJ 2021;374:n1793
- Jeremy Howick, director, Oxford Empathy Programme1,
- Rebecca Webster, lecturer in psychology2,
- J André Knottnerus, emeritus professor of general practice3,
- David Moher, director and professor4 5
- 1Faculty of Philosophy, University of Oxford, UK
- 2Department of Psychology, University of Sheffield, UK
- 3Maastricht University, Netherlands
- 4Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, University of Ottawa, Canada
- 5School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Canada
- Correspondence to: J A Knottnerus andre.knottnerus@maastrichtuniversity.nl, D Moher dmoher@ohri.ca
Yes—Jeremy Howick, Rebecca Webster, and J André Knottnerus
In 1996 a group of medical journal editors, clinical trialists, epidemiologists, and methodologists met in Chicago to develop a checklist to help researchers report the results of their clinical trials completely and transparently.1 The result was the Consolidated Standards of Reporting Trials (CONSORT), which has aimed to improve reporting of randomised controlled trials (RCTs) ever since.
The original 1996 statement included a half-page guide embedded in a three-page explanatory document.2 The updated 2010 CONSORT statement includes 25 items embedded in a 28-page paper, as well as a separate explanatory document. That’s just the basic version. There are versions of CONSORT for trials of herbal treatments, orthodontic treatments, feasibility studies, and at least 25 other subtypes.3 And that’s just CONSORT; many other reporting guidelines have been developed since 1996.
Multiple guidelines are often required for a single study, each with a host of supplementary documents. Identifying, understanding, and implementing these guidelines takes time and effort. This would be justified if these efforts all improved study quality—or better still, patient outcomes. Such evidence is absent.
Guideline developers measure their success by checking how well researchers adhere to the guidance.4 Such checks often reveal that the guidelines are barely used.5 Worse, checking whether guidelines are used doesn’t tell us whether the guidelines (in their current form) are useful for improving research in the first place.
Perverse incentives
In addition, since successful (or successfully followed) guidelines lead to …