Abstract
In this paper we investigate how people's level of trust (as reported through self-assessment) in so-called "AI" (artificial intelligence) is influenced by anthropomorphizing language in system descriptions. Building on prior work, we define four categories of anthropomorphization (1. Properties of a cognizer, 2. Agency, 3. Biological metaphors, and 4. Properties of a communicator). We use a survey-based approach (n=954) to investigate whether participants are likely to trust one of two (fictitious) "AI" systems by randomly assigning people to see either an anthropomorphized or a de-anthropomorphized description of the systems. We find that, overall, participants are no more likely to trust anthropomorphized than de-anthropomorphized product descriptions. The type of product or system in combination with different anthropomorphic categories appears to exert greater influence on trust than anthropomorphizing language alone, and age is the only demographic factor that significantly correlates with people's preference for anthropomorphized or de-anthropomorphized descriptions. When elaborating on their choices, participants highlight factors such as the lesser of two evils, lower- or higher-stakes contexts, and human favoritism as driving motivations when choosing between product A and product B, irrespective of whether they saw an anthropomorphized or a de-anthropomorphized description of the product. Our results suggest that "anthropomorphism" in "AI" descriptions is an aggregate concept that may influence different groups differently, and they provide nuance to the discussion of whether anthropomorphization leads to higher trust in, and over-reliance on, systems sold as "AI" by the general public.
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the ACM Conference on Fairness, Accountability, and Transparency |
| Number of pages | 26 |
| Publisher | Association for Computing Machinery |
| Publication date | 3 Jun 2024 |
| Pages | 2322-2347 |
| DOIs | |
| Publication status | Published - 3 Jun 2024 |