Abstract
Problems resulting from inscrutable AI systems are increasingly common, so there is a growing need to explain how such systems produce their outputs. Drawing on a case study at the Danish Business Authority, we provide a framework and recommendations for addressing the many challenges of explaining the behavior of black-box AI systems. Our findings will help organizations develop and deploy AI systems successfully without causing legal or ethical problems.
| Original language | English |
|---|---|
| Article number | 7 |
| Journal | MIS Quarterly Executive |
| Volume | 19 |
| Issue number | 4 |
| Pages (from-to) | 259-278 |
| ISSN | 1540-1960 |
| DOI | |
| Status | Published - 1 Dec 2020 |
Keywords
- Artificial intelligence
- Explainability
- Envelopment