The Effect of AI Explanations on Medical Experts Detecting Misdiagnosis by AI Systems

Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › peer-review

Abstract

Recognizing that artificial intelligence (AI) systems come with potentially limited explainability, interest in how to formulate AI explanations for medical experts is growing. While prior research establishes a basic understanding of such experts' preferences, it falls short of addressing a context shown to be in particular need of explanations: the case of AI systems providing a misdiagnosis. To address this gap, an online experiment with medical experts (n = 202) was conducted to investigate how effectively explanations support experts in detecting a misdiagnosis. The preliminary results indicate that feature attribution explanations are the most effective for detecting misdiagnoses. The overall research project aims to contribute to the literature by providing a cognitive psychology lens for understanding differences in the perception of explanations, and to practice by giving medical managers highly valuable guidance on which explanations to implement in this critical medical context.
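
The abstract does not specify how the feature attribution explanations in the experiment were generated. As a rough illustration only, the following minimal sketch shows one common way to produce such an explanation for a diagnosis classifier, using scikit-learn's permutation importance on synthetic data; the feature names, model, and data are illustrative assumptions, not taken from the study.

    # Minimal sketch: a feature attribution "explanation" for a diagnosis
    # classifier via permutation importance. All data is synthetic and the
    # clinical feature names are hypothetical, not from the paper.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic patient records with four illustrative clinical features.
    feature_names = ["age", "blood_pressure", "heart_rate", "lab_marker"]
    X = rng.normal(size=(500, 4))
    # Toy ground truth: the diagnosis is driven mainly by the lab marker.
    y = (X[:, 3] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Feature attribution: how much each feature contributes to the model's
    # predictions, presented as a ranked list an expert could inspect.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for name, score in sorted(zip(feature_names, result.importances_mean),
                              key=lambda t: -t[1]):
        print(f"{name}: {score:.3f}")

An explanation of this kind accompanies the AI system's diagnosis with per-feature importance scores, which is what would let an expert notice when the model's decision rests on clinically implausible features.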
Original language: English
Title of host publication: ECIS 2024 Proceedings
Publication date: 2024
Publication status: Published - 2024
Externally published: Yes
Event: The 32nd European Conference on Information Systems - Paphos, Cyprus
Duration: 17 Jun 2024 - 19 Jun 2024
https://ecis2024.eu/

Conference

Conference: The 32nd European Conference on Information Systems
Country/Territory: Cyprus
City: Paphos
Period: 17/06/2024 - 19/06/2024
Internet address: https://ecis2024.eu/

Keywords

  • AI Explanations
  • Medical Explainable AI
  • Toulmin's Model
  • AI Misdiagnosis
