Semester of Graduation
Spring 2026
Degree Type
Dissertation/Thesis
Degree Name
International Conflict Management
Department
School of Conflict Management, Peacebuilding and Development
Committee Chair/First Advisor
Volker Franke
Second Advisor
Aaron M. French
Third Advisor
Darina Lepadatu
Fourth Advisor
Gregory Phelan
Abstract
Conflict-based high-stakes decisions are often made under uncertainty, time pressure, and cognitive constraint. In such environments, decision makers rarely optimize and instead rely on satisficing strategies shaped by bounded rationality and incomplete information. Although generative artificial intelligence (GenAI) is increasingly integrated into decision processes, limited empirical research explains how AI assistance alters cognitive effort, decision thresholds, and heuristic reliance under constraint. Drawing on bounded rationality theory and dual-process models, this dissertation examines whether GenAI reduces perceived cognitive effort, increases satisficing behavior, and shifts reliance toward heuristic processing in conflict-relevant decisions. Using a quasi-experimental 2 × 2 factorial design, 671 participants are randomly assigned to GenAI-assisted and control conditions crossed with two time constraints across two conflict-based vignettes. Methodologically, the study moves beyond sole reliance on Cronbach’s α by adopting a multidimensional validation strategy that integrates inferential testing, clustering, and exploratory machine learning. It introduces a new Deliberative Decision-Making Index, along with refined measures of AI trust and decision confidence, to capture changes in decision thresholds under AI assistance. Results provide limited but directionally consistent evidence that perceived GenAI reliability is associated with reduced analytic engagement under certain temporal conditions. The findings suggest that AI trust may function as a cognitive enabler of satisficing rather than merely an attitudinal disposition. Qualitative analyses further indicate that GenAI operates as cognitive scaffolding in conflict decision-making, structuring information and shaping decision closure under uncertainty. These findings have implications for the governance and design of AI-assisted decision systems in conflict-prone environments.
Included in
Behavioral Economics Commons, Cognitive Science Commons, Data Science Commons, Management Sciences and Quantitative Methods Commons, Science and Technology Studies Commons, Technology and Innovation Commons