Location

https://www.kennesaw.edu/ccse/events/computing-showcase/fa25-cday-program.php

Document Type

Event

Start Date

November 24, 2025, 4:00 PM

Description

AI-driven hiring tools are reshaping recruitment but often mirror biases present in their training data. PRISM examines how large language models express or reduce demographic bias during resume evaluation, and how the linguistic context of a prompt shapes these outcomes. Using a controlled dataset of 324 synthetic resumes with racially neutral surnames, differing only in first name as the demographic proxy, we compared GPT-3.5 Turbo with a Sentence-BERT similarity model. Under neutral prompts, no stable bias was observed across demographic groups, yet contextual shifts in the prompt changed how the model responded to proxy cues. These findings show that LLM bias can be activated or suppressed by prompt framing, underscoring the importance of context-aware prompt design for improving fairness.
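The evaluation design described above, identical resumes that differ only in a first-name demographic proxy, each scored against a job posting, can be sketched as follows. The names, resume text, and the token-overlap scorer (a stand-in for the Sentence-BERT similarity model) are illustrative assumptions, not the actual PRISM materials.

```python
import re

# Hypothetical first names used as demographic proxies; the actual PRISM
# name lists are not reproduced in the abstract, so these are placeholders.
PROXY_NAMES = {
    "group_a": ["Emily", "Greg"],
    "group_b": ["Lakisha", "Jamal"],
}

# Minimal resume template and job posting, illustrative only.
RESUME_TEMPLATE = (
    "{first} Smith\n"
    "Software engineer with 5 years of Python and SQL experience."
)
JOB_POSTING = "Seeking a software engineer experienced in Python and SQL."

def tokens(text: str) -> set:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def overlap_score(resume: str, posting: str) -> float:
    """Stand-in for an embedding-similarity model such as Sentence-BERT:
    fraction of posting tokens that also appear in the resume."""
    return len(tokens(resume) & tokens(posting)) / len(tokens(posting))

def group_scores(template: str, posting: str) -> dict:
    """Score every name variant; resumes differ only in the first name."""
    return {
        group: [overlap_score(template.format(first=name), posting)
                for name in names]
        for group, names in PROXY_NAMES.items()
    }

scores = group_scores(RESUME_TEMPLATE, JOB_POSTING)
# A purely lexical scorer cannot react to the name, so every variant ties --
# mirroring the "no stable bias under neutral prompts" baseline; an LLM
# scorer swapped in here could diverge across groups as the prompt changes.
```

Replacing `overlap_score` with calls to an embedding model or an LLM-based grader, and varying the prompt framing around the posting, is where context-dependent score gaps between groups would surface.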

GRM-20169 Proxy Recognition and Inclusive Scoring Method (PRISM): Evaluating Context-Dependent Bias in Large Language Models for Resume Screening
