Project Title: Communication and Trust Formation in Generative AI Interaction.
Project Description:
Communication, such as making promises about future actions, is fundamental to forming trust in both formal and informal human relationships. Behavioural experiments have consistently demonstrated that humans often set aside self-interest in order to keep their past non-binding promises. The predominant explanatory mechanism for such human trust relies on guilt aversion: the negative reinforcement of psychological guilt that results from untrustworthy behaviour. However, generative AI systems lack neurobiological responses such as guilt, raising critical questions about the mechanisms of trust formation in human-AI interactions. As AI systems become integral to our social and economic environments, it is imperative to investigate how trust is established between humans and AI, and how it differs from trust in human-human interactions.
The goal of the project is to examine the role of communication in trust formation across three relational contexts (human-human, human-AI, and AI-AI interaction) and three simulated situational contexts (economic, social, and robot interaction). The study employs a multifactorial design with the relational and situational contexts as the main factors. The anticipated outcomes of this project are threefold:
1. Experimental findings that highlight the differences between human and generative AI communication for the purposes of trust formation
2. A peer-reviewed journal submission detailing the findings on the nature of these differences
3. International collaboration with European partners, laying the foundation for future EU Horizon grant applications
The study uses a mixed-method approach with an adaptation of the repeated trust game as the core experimental paradigm. Participants in a repeated trust game communicate in natural language to establish and maintain trust over repeated plays of a strategic game in which one participant has a self-interested incentive to be untrustworthy. By analyzing the nature of this communication and its relation to the strategic choices participants make, Bowen and Baruah will establish the role of communication in trust formation.
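As orientation for readers unfamiliar with the paradigm, the payoff logic of a single round can be sketched as follows; the endowment of 10 and multiplier of 3 are the conventional values, since the project description does not state its exact parameters:

```python
# Minimal sketch of one trust-game round with conventional parameters;
# the project's actual endowment and multiplier are assumptions here.
ENDOWMENT = 10
MULTIPLIER = 3


def play_round(sent: int, returned: int) -> tuple[int, int]:
    """Payoffs (trustor, trustee) for one exchange.

    The trustee's self-interested move is to return nothing, which is
    why non-binding promises in the accompanying chat matter for trust.
    """
    assert 0 <= sent <= ENDOWMENT
    pot = sent * MULTIPLIER
    assert 0 <= returned <= pot
    trustor_payoff = ENDOWMENT - sent + returned
    trustee_payoff = pot - returned
    return trustor_payoff, trustee_payoff


# Example: full trust met with an equal split leaves both players better
# off than no trust at all (which would yield 10 and 0).
print(play_round(10, 15))  # -> (15, 15)
```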
Experimental design: Since contextual factors influence trust formation, Bowen and Baruah are adapting the repeated trust game to three contextual settings:
1. Semi-formal economic partnership, which simulates the trust dynamics resulting from verbal contracts
2. Informal social interaction, which simulates everyday casual interactions
3. Robot interaction, which simulates trust formation between a human and a simulated household robot.
The experiment is also being conducted in two arms:
Control Arm: This arm establishes the baseline human trust data for each context using the oTree framework (a minimal oTree sketch follows the two arm descriptions). Participants for the informal social and robot interaction contexts will be recruited through the Prolific crowdsourcing platform, while existing literature will provide data for the economic partnership context.
AI arm: Each context is replicated with large language model (LLM)-based generative AI agents assuming the participant roles (see the agent sketch below).
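As a rough illustration of the control arm's implementation, the trust game could be modelled in oTree 5 along the following lines; the constants, field names, and payoff function are assumptions for this sketch, and the real app would also include page classes and the chat interface:

```python
# Hypothetical oTree 5 sketch of the repeated trust game; parameter values
# and field names are illustrative, not the project's actual configuration.
from otree.api import *


class C(BaseConstants):
    NAME_IN_URL = 'repeated_trust'
    PLAYERS_PER_GROUP = 2
    NUM_ROUNDS = 10          # repeated play
    ENDOWMENT = cu(10)
    MULTIPLIER = 3


class Subsession(BaseSubsession):
    pass


class Group(BaseGroup):
    # Free-text message exchanged before the investment decision,
    # which later feeds the thematic analysis.
    message = models.LongStringField(blank=True)
    sent_amount = models.CurrencyField(min=0, max=C.ENDOWMENT)
    sent_back_amount = models.CurrencyField(min=0)


class Player(BasePlayer):
    pass


def set_payoffs(group: Group):
    # Standard trust-game payoffs: the trustee's selfish best reply is
    # to return nothing, so returning behaviour signals promise-keeping.
    trustor, trustee = group.get_players()
    trustor.payoff = C.ENDOWMENT - group.sent_amount + group.sent_back_amount
    trustee.payoff = group.sent_amount * C.MULTIPLIER - group.sent_back_amount
```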
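For the AI arm, an LLM agent might assume a participant role through an API loop such as the sketch below; the provider, model name, prompt wording, and response handling are all assumptions, since the project description does not specify them:

```python
# Hypothetical sketch of an LLM agent playing the trustee role.
# The OpenAI client and model name are assumptions; any chat-completion
# provider could be substituted.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a participant in a repeated trust game. Each round your "
    "partner may send you tokens, which are tripled in transit. You "
    "decide how many tokens to return, and you may exchange a short "
    "free-text message with your partner before deciding."
)


def trustee_turn(history: list[dict], received: int) -> str:
    """Query the model for a return decision and a message, given the dialogue so far."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}, *history]
    messages.append({
        "role": "user",
        "content": (
            f"Your partner sent {received} tokens, so you hold {received * 3}. "
            "State how many tokens you return and write a brief message."
        ),
    })
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content
```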
Analysis: The project employs a mixed-method sequential analysis, beginning with qualitative analysis followed by quantitative analysis. Bowen and Baruah apply thematic analysis to the natural language communication data, and the resulting themes act as independent variables in the quantitative analysis of the strategy choices.
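To make the sequential design concrete, a minimal sketch of the quantitative step is shown below; the dataset, outcome column, and theme names (promise, apology, small_talk) are hypothetical placeholders for whatever the thematic analysis actually yields:

```python
# Illustrative only: regress the trustor's choice on theme codes from the
# qualitative stage. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per round: binary theme codes plus the situational context.
df = pd.read_csv("trust_game_rounds.csv")

# Logistic regression of whether the trustor invested ('trusted') on the
# presence of each communication theme, controlling for context.
model = smf.logit("trusted ~ promise + apology + small_talk + C(context)", data=df)
print(model.fit().summary())
```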