Abstract
International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multi-center study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses performed based on comprehensive descriptions of the submitted algorithms linked to their rank as well as the underlying participation strategies revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and post-processing (66%). The “typical” lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly-ranked teams: the reflection of the metrics in the method design and the focus on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art, but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.
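To illustrate the multi-task learning pattern the abstract reports for most winning solutions, a minimal sketch follows: a shared encoder feeding two task heads trained with a joint loss. The architecture, task heads, and loss weighting are hypothetical examples chosen for illustration, not taken from the paper or any challenge submission.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Toy multi-task model: one shared encoder, two task-specific heads."""

    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        # Shared feature encoder used by both tasks
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cls_head = nn.Linear(32, num_classes)  # task 1: image-level classification
        self.reg_head = nn.Linear(32, 1)            # task 2: auxiliary regression target

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)
        return self.cls_head(z), self.reg_head(z)

model = MultiTaskNet()
images = torch.randn(4, 1, 64, 64)                  # dummy batch of grayscale images
cls_logits, reg_pred = model(images)
cls_loss = nn.CrossEntropyLoss()(cls_logits, torch.randint(0, 2, (4,)))
reg_loss = nn.MSELoss()(reg_pred, torch.randn(4, 1))
loss = cls_loss + 0.5 * reg_loss                    # task weighting is a free design choice
loss.backward()
```

The point of the pattern is that both objectives back-propagate through the same encoder, so the auxiliary task acts as a regularizer for the primary one; how the two losses are weighted is a design decision left open here.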
| Original language | English |
|---|---|
| Title | Proceedings of the CVPR conference |
| Publication date | 2023 |
| DOI | |
| Status | Published - 2023 |
| Event | 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) - Vancouver Convention Center, Vancouver, Canada. Duration: 18 Jun 2023 → 22 Jun 2023. https://cvpr.thecvf.com/Conferences/2023 |

Conference

| Conference | 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) |
|---|---|
| Location | Vancouver Convention Center |
| Country/Territory | Canada |
| City | Vancouver |
| Period | 18/06/2023 → 22/06/2023 |
| Internet address | |