
Completeness of reporting of clinical prediction models developed using supervised machine learning: a systematic review

Andaur Navarro, Constanza L.; Nijman, Steven W. J.; Dhiman, Paula; Ma, Jie; Collins, Gary S.; Takada, Toshihiko; Damen, Johanna A. A.; Bajpai, Ram; Riley, Richard D.; Moons, Karel G. M.; Hooft, Lotty



BACKGROUND: While many studies have consistently found incomplete reporting of regression-based prediction model studies, evidence is lacking for machine learning-based prediction model studies. We aimed to systematically review the adherence of machine learning (ML)-based prediction model studies to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Statement.

METHODS: We included articles reporting on the development or external validation of a multivariable prediction model (diagnostic or prognostic) developed using supervised ML for individualized predictions, across all medical fields. We searched PubMed from 1 January 2018 to 31 December 2019. Data extraction was performed using the 22-item TRIPOD checklist for reporting of prediction model studies. We measured overall adherence per article and per TRIPOD item.

RESULTS: Our search identified 24,814 articles, of which 152 were included: 94 (61.8%) prognostic and 58 (38.2%) diagnostic prediction model studies. Overall, articles adhered to a median of 38.7% (IQR 31.0-46.4%) of TRIPOD items. No article fully adhered to complete reporting of the abstract, and very few completely reported the flow of participants (3.9%, 95% CI 1.8 to 8.3), an appropriate title (4.6%, 95% CI 2.2 to 9.2), blinding of predictors (4.6%, 95% CI 2.2 to 9.2), model specification (5.2%, 95% CI 2.4 to 10.8), or the model's predictive performance (5.9%, 95% CI 3.1 to 10.9). The source of data (98.0%, 95% CI 94.4 to 99.3) and interpretation of the results (94.7%, 95% CI 90.0 to 97.3) were often completely reported.

CONCLUSION: As with prediction model studies developed using conventional regression-based techniques, the completeness of reporting is poor. Essential information for deciding whether to use a model (i.e., its specification and performance) is rarely reported. However, some TRIPOD items and sub-items may be less suitable for ML-based prediction model studies, and TRIPOD therefore requires extensions. Overall, there is an urgent need to improve the reporting quality and usability of research to avoid research waste.

SYSTEMATIC REVIEW REGISTRATION: PROSPERO, CRD42019161764.

Journal Article Type Article
Acceptance Date Nov 15, 2021
Online Publication Date Jan 13, 2022
Publication Date Jan 13, 2022
Publicly Available Date May 30, 2023
Electronic ISSN 1471-2288
Publisher Springer Verlag
Peer Reviewed Peer Reviewed
Volume 22
Issue 1
Article Number 12
Keywords Prediction model, Diagnosis, Prognosis, Development, Validation, Reporting Adherence, Reporting guideline, TRIPOD