TY - UNPB
T1 - False Promises in Medical Imaging AI?
T2 - Assessing Validity of Outperformance Claims
AU - Christodoulou, Evangelia
AU - Reinke, Annika
AU - Andrè, Pascaline
AU - Godau, Patrick
AU - Kalinowski, Piotr
AU - Houhou, Rola
AU - Erkan, Selen
AU - Sudre, Carole H.
AU - Burgos, Ninon
AU - Boutaj, Sofiène
AU - Loizillon, Sophie
AU - Solal, Maëlys
AU - Cheplygina, Veronika
AU - Heitz, Charles
AU - Kozubek, Michal
AU - Antonelli, Michela
AU - Rieke, Nicola
AU - Gilson, Antoine
AU - Mayer, Leon D.
AU - Tizabi, Minu D.
AU - Cardoso, M. Jorge
AU - Simpson, Amber
AU - Kopp-Schneider, Annette
AU - Varoquaux, Gaël
AU - Colliot, Olivier
AU - Maier-Hein, Lena
PY - 2025/5/7
Y1 - 2025/5/7
N2 - Performance comparisons are fundamental in medical imaging Artificial Intelligence (AI) research, often driving claims of superiority based on relative improvements in common performance metrics. However, such claims frequently rely solely on empirical mean performance. In this paper, we investigate whether newly proposed methods genuinely outperform the state of the art by analyzing a representative cohort of medical imaging papers. We quantify the probability of false claims based on a Bayesian approach that leverages reported results alongside empirically estimated model congruence to estimate whether the relative ranking of methods is likely to have occurred by chance. According to our results, the majority (>80%) of papers claim outperformance when introducing a new method. Our analysis further revealed a high probability (>5%) of false outperformance claims in 86% of classification papers and 53% of segmentation papers. These findings highlight a critical flaw in current benchmarking practices: claims of outperformance in medical imaging AI are frequently unsubstantiated, posing a risk of misdirecting future research efforts.
KW - cs.CV
U2 - 10.48550/arXiv.2505.04720
DO - 10.48550/arXiv.2505.04720
M3 - Preprint
BT - False Promises in Medical Imaging AI?
ER -