GRP-096-183 Early Warning Signals for Geopolitical Oil Shocks via Multi-Model NLP Sentiment Analysis
Location
https://www.kennesaw.edu/ccse/events/computing-showcase/sp26-cday-program.php
Document Type
Event
Start Date
April 22, 2026, 4:00 PM
Description
The Strait of Hormuz carries roughly 20% of the world’s daily oil supply. Its closure on March 4, 2026, sent Brent crude surging 36.6%, from $74.64 to a peak of $118.35/barrel. Traditional time-series models fail during such unprecedented shocks because the historical price data contains no analog. This study evaluates whether NLP sentiment analysis can detect crisis signals in news text before they appear in prices, and whether agreement patterns across models predict volatility. We score 2,249 Guardian news articles using five deterministic sentiment models across three tiers: a lexicon baseline (VADER), a general-purpose transformer (RoBERTa-CardiffNLP), and three financial-domain transformers (FinBERT, FinBERT-Tone, DistilRoBERTa-Financial). Financial-domain models detected the crisis 47 days before the closure, outperforming general-purpose models by 26 days. Article volume proved the strongest volatility predictor (r = 0.76, p < 0.0001), while model consensus, not disagreement, signaled crisis severity, inverting our original hypothesis. LSTM and XGBoost forecasting experiments show all model variants converging near a naive persistence baseline during the crisis, confirming that point prediction of unprecedented events remains fundamentally limited and reinforcing the value of upstream textual early warning.
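The two quantitative devices in the abstract, cross-model agreement and the naive persistence baseline, can be sketched in a few lines. This is a minimal illustration, not the study's actual pipeline: the example scores and prices below are hypothetical, and the study's five models are stand-ins here for any per-article score vector. Disagreement is measured as the standard deviation of the five model scores (low values = consensus); the persistence baseline forecasts each day's price as the previous day's price.

```python
import statistics


def model_disagreement(scores: list[float]) -> float:
    """Population std-dev of one article's sentiment scores across models.

    Low values indicate consensus; per the abstract, consensus (not
    disagreement) tracked crisis severity.
    """
    return statistics.pstdev(scores)


def persistence_mae(prices: list[float]) -> float:
    """Mean absolute error of the naive persistence forecast y_hat[t] = y[t-1].

    Any learned forecaster must beat this to add value; during the crisis,
    the abstract reports all variants converging near it.
    """
    errors = [abs(curr - prev) for prev, curr in zip(prices, prices[1:])]
    return sum(errors) / len(errors)


# Hypothetical example: five model scores for one article, four daily prices.
scores = [0.8, 0.7, 0.9, 0.85, 0.75]
prices = [100.0, 102.0, 101.0, 105.0]
print(round(model_disagreement(scores), 4))  # near zero -> consensus
print(round(persistence_mae(prices), 4))
```

In the full study the disagreement series would be computed per article and aggregated per day, then compared against realized volatility alongside article volume.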