Abstract
The paper presents an approach for implementing inscrutable (i.e., non-explainable) artificial intelligence (AI), such as neural networks, in an accountable and safe manner in organizational settings. Drawing on an exploratory case study and the recently proposed concept of envelopment, it describes a case of an organization successfully “enveloping” its AI solutions to balance the performance benefits of flexible AI models with the risks that inscrutable models can entail. The authors present several envelopment methods: establishing clear boundaries within which the AI interacts with its surroundings, carefully choosing and curating the training data, and appropriately managing input and output sources. They also discuss how these methods shape the choice of AI models within the organization. This work makes two key contributions. First, it introduces the concept of sociotechnical envelopment by demonstrating how an organization’s successful AI envelopment depends on the interaction of social and technical factors, thus extending the literature’s focus beyond merely technical issues. Second, the empirical examples illustrate how operationalizing a sociotechnical envelopment enables an organization to manage the trade-off between the low explainability and high performance of inscrutable models. These contributions pave the way for more responsible, accountable AI implementations in organizations, in which humans can gain better control of even inscrutable machine-learning models.
Original language | English |
---|---|
Journal | Journal of the Association for Information Systems (JAIS) |
Volume | 22 |
Issue number | 2 |
Pages (from-to) | 325-352 |
Number of pages | 28 |
ISSN | 1536-9323 |
DOI | |
Status | Published - 9 Mar. 2021 |
Keywords
- Artificial Intelligence
- Explainable AI
- XAI
- Envelopment
- Sociotechnical Systems
- Machine Learning
- Public Sector