Document Type

Article

Publication Date

5-9-2026

Abstract

As AI systems achieve near-perfect accuracy in high-stakes decision-making domains, governance frameworks risk overlooking a critical paradox: superior AI reliability may render human oversight ceremonial rather than functional. This paper presents a theoretically grounded experimental research design to investigate automation complacency—the cognitive degradation of human vigilance triggered by sustained streaks of accurate AI outputs. Grounded in Dual-Process Theory and the competency trap literature, we hypothesize that consistent AI reliability shifts operators from analytical (System 2) to heuristic (System 1) processing, significantly impairing error detection when the AI eventually fails. Using a simulated phishing detection task (N=200), we plan to test how AI reliability streaks affect human verification accuracy and propose three governance contributions: reframing human-in-the-loop (HITL) oversight as a dynamic rather than static control, designing artificial friction mechanisms, and reassigning accountability from operators to system architects. This research offers a critical course correction for AI governance policy.
