Exploring Sharing of Task Representations Among Human-AI Teams Using Gaze Analysis

Disciplines

Human Factors Psychology | Psychology | Social and Behavioral Sciences

Abstract

Research increasingly suggests that Artificial Intelligence (AI) should be considered a teammate rather than a mere tool. However, most studies on this topic rely on self-reports and interview responses, which are subject to biases and social desirability effects, as participants can control what they say. This study aimed to provide objective evidence on whether humans perceive AI and robots as teammates by analyzing attentional responses that are not easily controlled voluntarily. Participants performed a simulated surgical tool handoff task in a virtual environment. Each trial began with a text bubble over a surgeon’s shoulder, displaying a surgical tool in red, green, blue, or yellow. At the same time, four differently colored tools appeared on the table. Participants identified and handed off the tool if the one shown in the text bubble appeared in either of their two assigned target colors. In one block, a surgical technician robot performed the task alongside them, responding to the remaining two colors. In the remaining blocks, participants performed the task alone or with an inactive robot, responding only to their assigned colors. In both cases, they ignored the tool if it appeared in a non-target color. We hypothesized that if humans genuinely perceive AI as a teammate, as they claim, participants would pay more attention to tools in non-target colors when performing the task with the robot than when working alone. We analyzed participants’ glances at the tools, along with other data. The results offer deeper insights into the true nature of human-AI relationships beyond what self-reports suggest.

Academic department under which the project should be listed

RCHSS - Psychological Science

Primary Investigator (PI) Name

Hansol Rheem
