Unravelling Complexity: Investigating the Effectiveness of SHAP Algorithm for Improving Explainability in Network Intrusion System Across Machine and Deep Learning Models
Lakshya Vaswani, Sai Sri Harsha, Subham Jaiswal, and Aju D
2024, 20(7): 421-431. doi:10.23940/ijpe.24.07.p2.421431
Abstract
According to several studies, choosing the right features for a threat detection system can significantly raise the detection engine's effectiveness and accuracy. Advances such as distributed computing and big data have expanded network traffic, and a threat detection system must proactively acquire and analyze the data produced by incoming traffic. However, not all features in a large dataset help characterize the traffic, so restricting the input to a few suitable features can speed up the threat detection system and improve its accuracy. Deep neural networks enhance the detection rates of intrusion detection models, which has recently made machine learning-based intrusion detection systems (IDSs) especially useful. As models become more complex in pursuit of accuracy, however, users find it increasingly difficult to comprehend the reasoning behind their decisions. Using relevant features from the NSL-KDD dataset, we apply appropriate feature selection mechanisms to implement an intrusion detection system that is faster and more accurate, and we use the explainability method SHAP to interpret its results. The practicality of SHAP interpretation for machine learning (ML) and deep learning (DL) models depends heavily on efficiency: ML models are computationally efficient, while DL models require more resources. Both kinds of model, however, benefit from SHAP interpretations, which offer insight into feature importance and each feature's contribution to predictions. DL models excel in accuracy, while ML models offer efficiency; the choice depends on the particular requirements and available resources, with SHAP providing deeper knowledge of model behavior and feature impact.
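To make the workflow concrete, the sketch below illustrates the kind of pipeline the abstract describes: select a reduced feature subset, train a classifier, and attribute its predictions to individual features with SHAP. It is a minimal illustration, not the paper's implementation: synthetic data stands in for NSL-KDD, and the random-forest model, the mutual-information selector, and the value k=15 are assumptions chosen for the example.

```python
# Minimal sketch: feature selection -> ML detector -> SHAP interpretation.
# Synthetic data stands in for NSL-KDD; the model and selector choices
# below are illustrative assumptions, not the paper's exact pipeline.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split

# Stand-in for a preprocessed NSL-KDD feature matrix and attack/normal labels
X, y = make_classification(n_samples=2000, n_features=40,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Restrict the input to a few informative features to speed up detection
selector = SelectKBest(mutual_info_classif, k=15).fit(X_train, y_train)
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)

# Train a computationally efficient ML detector
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train_sel, y_train)

# SHAP attributes each prediction to per-feature contributions
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_test_sel)
# Binary classifiers may yield one array per class depending on the
# shap version; keep the attributions for the attack class (class 1)
if isinstance(sv, list):
    sv = sv[1]
elif sv.ndim == 3:
    sv = sv[:, :, 1]
shap.summary_plot(sv, X_test_sel)  # global feature-importance view
```

The same tree-based interpretation covers the efficient ML side; for DL models, SHAP's model-agnostic or gradient-based explainers (e.g., KernelExplainer or DeepExplainer) play the same role at the higher computational cost the abstract notes.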