Abstract
We target the automatic classification of fractures from clinical X-ray images following the Arbeitsgemeinschaft Osteosynthese (AO) classification standard. We decompose the problem into the localization of the region-of-interest (ROI) and the classification of the localized region. Our solution relies on current advances in multi-task end-to-end deep learning. More specifically, we adapt an attention model known as Spatial Transformer (ST) to learn an image-dependent localization of the ROI, trained only from image classification labels. As a case study, we focus here on the classification of proximal femur fractures. We provide a detailed quantitative and qualitative validation on a dataset of 1000 images and report high accuracy relative to the inter-expert correlation values reported in the literature.
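To make the architecture described in the abstract concrete, the sketch below shows a minimal Spatial-Transformer-based classifier in PyTorch: a localization network regresses an affine transform, the ROI is resampled with a differentiable grid sampler, and a classification network predicts the fracture class. This is an illustrative reconstruction of the general idea, not the authors' implementation; the layer sizes, ROI resolution, input resolution, and number of AO classes are assumptions.

```python
# Hypothetical sketch (assumed hyperparameters): an ST module learns an
# image-dependent affine crop of the ROI, followed by a classifier, trained
# end-to-end from classification labels only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class STFractureClassifier(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        # Localization network: regresses 2x3 affine parameters from the image.
        self.loc_features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((6, 6)),
        )
        self.loc_head = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 6 * 6, 32), nn.ReLU(), nn.Linear(32, 6)
        )
        # Initialize to the identity transform so training starts from the full image.
        self.loc_head[-1].weight.data.zero_()
        self.loc_head[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float)
        )
        # Classification network operating on the transformed (ROI) image.
        self.classifier = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.MaxPool2d(2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            nn.Linear(32 * 4 * 4, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Predict an affine transform per image and resample a fixed-size ROI.
        theta = self.loc_head(self.loc_features(x)).view(-1, 2, 3)
        grid = F.affine_grid(theta, size=(x.size(0), 1, 128, 128), align_corners=False)
        roi = F.grid_sample(x, grid, align_corners=False)
        return self.classifier(roi)


# Training uses only class labels; gradients flow through grid_sample into the
# localization network, so the ROI is learned without bounding-box annotations.
model = STFractureClassifier()
logits = model(torch.randn(2, 1, 512, 512))   # e.g. 512x512 grayscale X-rays
loss = F.cross_entropy(logits, torch.tensor([0, 3]))
loss.backward()
```

Because the grid sampler is differentiable, the classification loss alone supervises both sub-networks, which is what allows the ROI localization to be learned without explicit localization labels.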
| Original language | English |
| --- | --- |
| Title | MLMI 2017: Machine Learning in Medical Imaging |
| Volume | 10541 |
| Publication date | 7 Sep 2017 |
| ISBN (Print) | 978-3-319-67388-2 |
| ISBN (Electronic) | 978-3-319-67389-9 |
| DOI | |
| Status | Published - 7 Sep 2017 |
| Published externally | Yes |
Keywords
- label image classification
- spatial transformer network
- classification model parameters
- trauma surgery department
- adaptive network architecture