Abstract
Recognizing that artificial intelligence (AI) systems often offer limited explainability, interest is growing in how to formulate AI explanations for medical experts. While prior research lays a basic understanding of such experts' preferences, it falls short of addressing a context shown to be in particular need of explanations: the case of an AI system providing a misdiagnosis. To address this gap, we conducted an online experiment with medical experts (n=202) to investigate how effectively explanations support experts' detection of a misdiagnosis. The preliminary results indicate that feature attribution explanations are most effective for detecting misdiagnoses. The overall research project aims to contribute to the literature by providing a cognitive psychology lens for understanding differences in the perception of explanations, and to practice by giving medical managers valuable guidance on which explanations to implement in this critical medical context.
Original language | English |
---|---|
Title of host publication | ECIS 2024 Proceedings |
Publication date | 2024 |
Publication status | Published - 2024 |
Externally published | Yes |
Event | The 32nd European Conference on Information Systems, Paphos, Cyprus. Duration: 17 Jun 2024 → 19 Jun 2024. https://ecis2024.eu/ |
Conference
Conference | The 32nd European Conference on Information Systems |
---|---|
Country/Territory | Cyprus |
City | Paphos |
Period | 17/06/2024 → 19/06/2024 |
Internet address | https://ecis2024.eu/ |
Keywords
- AI Explanations
- Medical Explainable AI
- Toulmin's Model
- AI Misdiagnosis