Mitigating Discontinuance in Medical AI Systems: The Role of AI Explanations

Aycan Aslan, Maike Greve, Lutz Kolbe

Research output: Article in proceedings › Research › peer-review

Abstract

Despite significant advancements in medical artificial intelligence (AI) systems, these technologies are prone to mistakes in their predictions. These mistakes can significantly affect medical experts’ willingness to continue using these systems. Existing research indicates that providing additional information alongside predictions can lessen negative outcomes such as discontinuation. Given the potential impact on users’ information processing, we hypothesize that AI explanations, which detail the system's decision-making process, can also influence the likelihood of discontinuing use after an AI mistake. Through an online experiment with medical experts (n=227), we demonstrate that such explanations can influence medical experts’ information processing and, consequently, mitigate the adverse effects of a mistake on the actual discontinuation of AI systems.
Original language: English
Title of host publication: Wirtschaftsinformatik 2024 Proceedings
Number of pages: 16
Publication date: 2024
Publication status: Published - 2024
Externally published: Yes

Keywords

  • Artificial intelligence
  • decision-making
  • explainability
  • discontinuance
  • medicine
