Abstract:
Purpose: This study explores how the format of explanations used in AI-based services affects consumer behavior, specifically the effects of explanation detail (low vs. high) and consumer control (automatic vs. on demand) on trust and acceptance. The aim is to provide service providers with insights into how to optimize the format of explanations to enhance consumer evaluations of AI-based services.
Design/methodology/approach: Drawing on the literature on explainable AI (XAI) and information overload theory, a conceptual model is developed. To test the model empirically, two between-subjects experiments were conducted in which the level of detail and the level of control were manipulated, with AI-based recommendations as the use case. The data were analyzed via partial least squares (PLS) regressions.
Findings: The results reveal significant positive correlations between the level of detail and perceived understanding and between the level of detail and perceived assurance. The level of control negatively moderates the relationship between the level of detail and perceived understanding. Further analyses revealed that the perceived competence and perceived integrity of AI systems positively and significantly influence the acceptance of, and purchase intentions toward, AI-based services.
Originality/value: This article elucidates the nuanced interplay between the level of detail and the level of control over explanations for nonexpert consumers in high-credence service sectors. The findings offer insights for the design of more consumer-centric explanations to increase the acceptance of AI-based services.
Practical implications: This research offers service providers key insights into how tailoring explanations and balancing detail with control build consumer trust and enhance the outcomes of AI-based services.