
Risk of bias assessments in individual participant data meta-analyses of test accuracy and prediction models: a review shows improvements are needed.

Levis, Brooke; Snell, Kym IE; Damen, Johanna AA; Hattle, Miriam; Ensor, Joie; Dhiman, Paula; Andaur Navarro, Constanza L; Takwoingi, Yemisi; Whiting, Penny F; Debray, Thomas PA; Reitsma, Johannes B; Moons, Karel GM; Collins, Gary S; Riley, Richard D


Abstract

Risk of bias assessments are important in meta-analyses of both aggregate and individual participant data (IPD). There is limited evidence on whether and how risk of bias of included studies or datasets in IPD meta-analyses (IPDMAs) is assessed. We review how risk of bias is currently assessed, reported, and incorporated in IPDMAs of test accuracy and clinical prediction model studies, and provide recommendations for improvement. We searched PubMed (January 2018-May 2020) to identify IPDMAs of test accuracy and prediction models, then elicited whether each IPDMA assessed risk of bias of included studies and, if so, how assessments were reported and subsequently incorporated into the IPDMAs. Forty-nine IPDMAs were included. Nineteen of 27 (70%) test accuracy IPDMAs assessed risk of bias, compared to 5 of 22 (23%) prediction model IPDMAs. Seventeen of 19 (89%) test accuracy IPDMAs used QUADAS-2, but no tool was used consistently among prediction model IPDMAs. Of IPDMAs assessing risk of bias, seven (37%) test accuracy IPDMAs and one (20%) prediction model IPDMA provided details on the information sources (e.g., the original manuscript, IPD, primary investigators) used to inform judgments, and four (21%) test accuracy IPDMAs and one (20%) prediction model IPDMA provided information on whether assessments were done before or after obtaining the IPD of the included studies or datasets. Of all included IPDMAs, only seven test accuracy IPDMAs (26%) and one prediction model IPDMA (5%) incorporated risk of bias assessments into their meta-analyses. For future IPDMA projects, we provide guidance on how to adapt tools such as PROBAST (for prediction models) and QUADAS-2 (for test accuracy) to assess risk of bias of included primary studies and their IPD. Risk of bias assessments and their reporting need to be improved in IPDMAs of test accuracy and, especially, prediction model studies. Using recommended tools, both before and after IPD are obtained, will address this.
[Abstract copyright: Copyright © 2023. Published by Elsevier Inc.]

Journal Article Type: Article
Acceptance Date: Oct 30, 2023
Online Publication Date: Nov 2, 2023
Deposit Date: Nov 27, 2023
Journal: Journal of Clinical Epidemiology
Print ISSN: 0895-4356
Publisher: Elsevier
Peer Reviewed: Peer Reviewed
Pages: S0895-4356(23)00282-2
DOI: https://doi.org/10.1016/j.jclinepi.2023.10.022
Keywords: individual participant data meta-analysis, risk of bias, prediction models, test accuracy