Presenter Information

Location

https://www.kennesaw.edu/ccse/events/computing-showcase/sp26-cday-program.php

Document Type

Event

Start Date

4-22-2026 4:00 PM

Description

Large language models in enterprise settings often produce structurally invalid outputs when users communicate informally, creating silent failure modes that propagate unnoticed through downstream systems. This study investigates how prompt variation alone impacts schema compliance and output reliability. We evaluate three architectures across two tasks and four prompt styles, isolating the effect of interaction style on model behavior. Results show that baseline systems achieve 100% compliance under structured prompts but fail completely under ambiguous and casual inputs. A minimal reliability pipeline consisting of generation, self-critique, and schema validation restores 100% compliance across all conditions at a predictable computational cost.
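The generation, self-critique, and schema-validation pipeline described above could be sketched as a simple retry loop. This is an illustrative sketch only, not the authors' implementation: `toy_generate` and `toy_critique` are hypothetical stand-ins for LLM calls, and the two-key schema is an assumed example.

```python
import json
from typing import Callable

# Assumed example schema: a compliant output must contain both keys.
REQUIRED_KEYS = {"intent", "priority"}

def validate(output: dict) -> bool:
    """Schema validation step: check that all required keys are present."""
    return REQUIRED_KEYS.issubset(output)

def pipeline(prompt: str,
             generate: Callable[[str], dict],
             critique: Callable[[str, dict], dict],
             max_retries: int = 2) -> dict:
    """Generate a draft, then loop self-critique + re-validation
    until the output is schema-compliant or retries are exhausted."""
    candidate = generate(prompt)
    for _ in range(max_retries):
        if validate(candidate):
            return candidate
        # Self-critique step: ask the model to repair its own draft.
        candidate = critique(prompt, candidate)
    if not validate(candidate):
        raise ValueError("output failed schema validation after retries")
    return candidate

# Hypothetical stand-ins: a "model" that drops a field on casual input,
# and a critique step that fills in a default for the missing field.
def toy_generate(prompt: str) -> dict:
    out = {"intent": "ticket"}
    if prompt.strip().endswith("?"):  # stands in for a structured prompt
        out["priority"] = "low"
    return out

def toy_critique(prompt: str, draft: dict) -> dict:
    repaired = dict(draft)
    repaired.setdefault("priority", "unknown")
    return repaired

result = pipeline("hey can u log this", toy_generate, toy_critique)
print(json.dumps(result))  # compliant after one self-critique pass
```

Under this sketch, the casual prompt initially yields a draft missing `priority`; one critique pass repairs it, matching the abstract's claim that the pipeline restores compliance at a bounded, predictable cost (at most `max_retries` extra calls).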


GRM-174-228 Mitigating Prompt-Induced Variability in LLM Outputs
