
Publication Date

12-15-2025

Abstract

This research investigates the integration of explainable artificial intelligence (XAI) into machine learning (ML)-based intrusion detection systems (IDS), focusing on distinguishing malicious from benign network activity. We employed Random Forest and XGBoost models, evaluated on the widely recognized NSL-KDD and UNSW-NB15 datasets in both binary and multi-class classification tasks. The objective was to enhance cybersecurity operations through improved model transparency and interpretability. By integrating SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations), the study offers comprehensive global and local insights into model decision-making. Results demonstrate SHAP's effectiveness in providing a broad, dataset-wide understanding of feature importance and interactions, while LIME facilitates targeted analysis of specific misclassified instances, thereby revealing potential biases. This dual approach advances the state of the art by enabling security analysts to interpret AI-driven decisions, reduce false positives, and refine response strategies. Integrating XAI into IDS frameworks significantly improves system reliability, transparency, and practical applicability in real-world cybersecurity operations. Our findings offer actionable guidance for security practitioners, demonstrating how explainable AI can strengthen operational decision-making and improve the effectiveness of intrusion detection systems in production environments. The study also presents a practical platform that enables security teams to interact with and interpret model predictions, bridging the gap between theoretical research and operational cybersecurity practice.
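The sketch below illustrates the kind of dual-explanation workflow the abstract describes: a tree-based detector explained globally with SHAP and locally with LIME on misclassified instances. It is not the authors' code; the synthetic data stands in for preprocessed NSL-KDD / UNSW-NB15 features, and the model choice, feature names, and hyperparameters are illustrative assumptions.

# Minimal sketch, assuming a preprocessed numeric feature matrix for a
# binary IDS task (0 = benign, 1 = malicious). Synthetic data is used as a
# placeholder for NSL-KDD / UNSW-NB15; all parameters are illustrative.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
feature_names = [f"feat_{i}" for i in range(X.shape[1])]  # placeholder names
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier(n_estimators=200, eval_metric="logloss", random_state=0)
model.fit(X_train, y_train)

# Global view: SHAP values over the test set give a dataset-wide picture of
# feature importance and interactions for the detector.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
shap.summary_plot(shap_values, X_test, feature_names=feature_names)

# Local view: LIME explains a single prediction, e.g. a misclassified flow,
# showing which features drove that specific decision.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["benign", "malicious"], mode="classification")
misclassified = np.where(model.predict(X_test) != y_test)[0]
if len(misclassified) > 0:
    exp = lime_explainer.explain_instance(
        X_test[misclassified[0]], model.predict_proba, num_features=10)
    print(exp.as_list())

In practice, the global SHAP summary supports dataset-level auditing, while per-instance LIME output is what an analyst would consult when triaging an individual alert or false positive.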
