Presentation Type
Article
Location
Kennesaw, Georgia
Start Date
1-4-2026 1:45 PM
End Date
1-4-2026 3:00 PM
Description
Predictive maintenance in manufacturing demands more than accurate failure forecasting: maintenance engineers require transparent, auditable decision rationale to act confidently. Industrial digital twin environments produce rich, continuous streams of sensor data, yet existing AI models typically surface only numerical anomaly scores without contextual explanation. This paper presents XAgentMaint, an explainable agentic AI framework for predictive maintenance. The system integrates a Retrieval-Augmented Generation (RAG) knowledge base with a ReAct reasoning agent, while an XGBoost failure classifier serves as the core prediction component throughout the pipeline. The framework is evaluated on the NASA CMAPSS turbofan degradation dataset across all four sub-datasets (FD001-FD004), spanning single-fault and multi-fault scenarios. The RAG layer indexes 1,800 maintenance records and six failure-mode documents; the agent uses this retrieved evidence to translate classifier predictions into natural-language explanations for engineers, and a logged decision trail accompanies each explanation to support regulatory auditability. XAgentMaint achieves a Root Mean Square Error of 14.8 Remaining Useful Life cycles on FD001 and demonstrates consistent performance under multi-fault conditions. A systematic ablation study isolates the contribution of each system component: the XGBoost classifier, the RAG retrieval layer, and the ReAct reasoning agent. Comparative analysis against SHAP-based explanation pipelines shows that XAgentMaint produces explanations rated significantly more actionable by domain practitioners, with explanation fidelity reaching 83.7% as rated by three independent domain-expert evaluators. A 22-participant user study measures the practical impact of explanations on engineer performance: the XAgentMaint group completes diagnostic tasks 28% faster than the numerical-score-only group, a difference that is statistically significant (p < 0.01) across all tested scenarios.
Discussion of locally deployable language models and real industrial maintenance log integration addresses key deployment considerations for production environments.
XAgentMaint: Explainable Agentic Systems for Predictive Maintenance in Industrial Digital Twin Environments
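The explanation flow the abstract describes — a classifier prediction paired with retrieved maintenance evidence and a logged decision trail — can be illustrated with a minimal sketch. This is not the paper's implementation: the record contents, the keyword-overlap retriever (a stand-in for the RAG layer), and all function names here are hypothetical, and a fixed RUL value substitutes for the XGBoost classifier's output.

```python
from dataclasses import dataclass

@dataclass
class MaintenanceRecord:
    doc_id: str
    text: str

def retrieve(query: str, records: list[MaintenanceRecord], k: int = 2) -> list[MaintenanceRecord]:
    """Rank records by keyword overlap with the query (illustrative stand-in
    for the RAG retrieval layer over the indexed maintenance records)."""
    q = set(query.lower().split())
    scored = sorted(records, key=lambda r: -len(q & set(r.text.lower().split())))
    return scored[:k]

def explain(predicted_rul: float, fault: str, records: list[MaintenanceRecord]):
    """Translate a numeric prediction into a natural-language explanation,
    returning both the explanation and an auditable decision trail."""
    evidence = retrieve(fault, records)
    trail = [f"predicted RUL = {predicted_rul} cycles",
             f"retrieval query = '{fault}'"]
    trail += [f"retrieved {r.doc_id}" for r in evidence]
    explanation = (f"Predicted remaining useful life: {predicted_rul} cycles. "
                   f"Suspected fault: {fault}. Supporting evidence: "
                   + "; ".join(r.text for r in evidence) + ".")
    return explanation, trail

# Hypothetical maintenance records standing in for the indexed knowledge base.
records = [
    MaintenanceRecord("rec-001", "HPC degradation observed with rising compressor exit temperature"),
    MaintenanceRecord("rec-002", "Fan imbalance corrected after vibration alarm"),
    MaintenanceRecord("rec-003", "HPC degradation led to early overhaul at cycle 190"),
]

explanation, trail = explain(14.8, "HPC degradation", records)
print(explanation)
for step in trail:
    print(" -", step)
```

The key design point the abstract emphasizes survives even in this toy form: the explanation shown to the engineer and the logged trail supporting regulatory audit are produced together from the same retrieval step, so every claim in the explanation is traceable to a recorded record ID.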