Date of Award

Fall 12-2-2024

Degree Type

Dissertation

Degree Name

Doctor of Philosophy in Business Administration

Department

Information Systems

Committee Chair/First Advisor

Dr. Adriane Randolph

Second Advisor

Dr. Reza Vaezi

Third Advisor

Dr. Soo Il Shin

Abstract

Despite the general acceptance of trust as a key determinant of user intentions toward technological innovations, modeling trust in human-computer interactions remains a challenge in information systems research, where studies have revealed disparities in the conceptualization and measurement of trust and its drivers. This explains why, despite an emerging array of guidelines on how to design, develop, and deploy "trustworthy AI" in healthcare, gaps remain in understanding the nuanced effects of trust and trust building on users' acceptance of artificial intelligence (AI)-driven healthcare decisions and recommendations.

To bridge this gap, our research focuses on the underlying mechanisms and antecedents that could explain the trust-building process in human-AI agent interactions, considering factors beyond system attributes and performance. We argue that organizations developing and implementing these tools need to be accountable for how the tools are integrated into their processes, and we propose a model that incorporates key institutional trust-related factors to examine the effects of trust formation, healthcare professional oversight, and disease severity on individual willingness to accept recommendations from conversational AI agents in the specific context of healthcare.

Our research model predicts that the institutional factors of situational normality, structural assurance, and cognitive reputation, together with the human factor of healthcare professional oversight in AI lifecycle design and implementation, will increase trusting beliefs and influence an individual's willingness to accept AI recommendations for their care. The study also explores the moderating role of disease severity on the relationship between trusting beliefs and willingness to accept recommendations for care from the AI agent.

The model was tested using a web survey of adults aged 18 years and older across the United States. The data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The findings support the role of institutional factors as important antecedents of trusting beliefs and intentions, but not the role of perceived healthcare professional oversight as a human in the loop. Disease severity had no effect on the relationship between trusting beliefs and intention but had a direct effect on intention. These findings draw attention to the importance of institutional factors in shaping trust, particularly in high-risk environments like healthcare. While previous literature has suggested that human oversight in the AI lifecycle may be a critical factor in building trust and driving acceptance of the technology, the current findings indicate that acceptance may not always depend on the perception of human oversight, suggesting the need to reevaluate existing models that prioritize human involvement. The absence of a moderating effect of disease severity on the relationship between trusting beliefs and intention suggests that trusting beliefs influence individual intentions toward AI recommendations consistently, irrespective of disease severity. The significant direct effect of disease severity, however, emphasizes the importance of considering how disease context can independently influence the acceptance of AI technologies in healthcare.

The study results contribute to the extant literature and practice in several ways. The findings extend existing theories of trust formation in AI technology acceptance by highlighting the importance of institutional factors in building trust in high-risk environments like healthcare. The finding that healthcare professional oversight, as a form of human involvement in AI design, implementation, and use, had no significant impact on trusting beliefs calls for a reevaluation of the role of human oversight in trust models and challenges the Human-in-the-Loop (HITL) model of AI trust. The findings also suggest that healthcare institutions that prioritize, create, and communicate strong institutional safeguards and assurances help build trust among users of their technology systems.

Available for download on Saturday, December 04, 2027
