Today’s business processes increasingly involve interactions with human-like algorithmic assistants based on artificial intelligence (AI). The prevalence of such anthropomorphized algorithmic assistants poses a predicament for managers because it is often unclear to what extent, and for which purposes, algorithmic assistants can and will be trusted by customers and employees. In the near future, almost half of today’s work activities could be automated (Frey & Osborne, 2017), and as AI increasingly changes how we work, and thereby the very fabric of our organizations (Faraj et al., 2018), it becomes ever more salient for managers to establish how human agents can interact with AI assistants in a reliable and trustworthy manner. In this paper, we examine how the anthropomorphization of algorithmic assistants affects trust by human agents. Interestingly, our preliminary findings suggest that affective capabilities increase trust not only in an algorithm’s affective capabilities but also in its cognitive capabilities. While the application of such AI-based algorithms increases efficiency in work processes and thereby economic output (Brynjolfsson & McAfee, 2014) and can eliminate human bias in decision-making (McAfee & Brynjolfsson, 2012), the long-term consequences of interactions between human and algorithmic agents in organizations remain sparsely understood (Mankins & Sherer, 2014; Newell & Marabelli, 2015).
|Publication status||Published - 2019|
|Event||Pre-ICIS paper development workshop on the JAIS-MISQE joint special issue on “Artificial Intelligence in Organizations: Opportunities for Management and Implications for IS Research” - Munich, Germany|
Duration: 9 Dec 2019 → 9 Dec 2019