Pieter Barnard et al.
Explainable artificial intelligence (XAI) methodologies can demystify the behavior of machine learning (ML) "black-box" models based on the individual impact each feature has on the model's output. In the cybersecurity domain, explanations of this type have been studied from the perspective of a human-in-the-loop, where they serve an essential role in building trust among stakeholders and in aiding practitioners with tasks such as feature selection and model debugging. However, another important but largely overlooked use case for explanations emerges when they are passed as inputs into other ML models. In this setting, the rich information encompassed by the explanations can be harnessed at a fine-grained level and used to enhance the performance of the overall system. In this work, we outline a general methodology whereby explanations of a front-end network intrusion detection system (NIDS) are leveraged alongside additional ML models to automatically improve the system's overall performance. We demonstrate the robustness of our methodology by evaluating its performance across multiple intrusion datasets and perform an in-depth analysis of its generalizability under various conditions, such as unseen environments and varying types of front-end NIDS and XAI methodologies. The overall results indicate the efficacy of our methodology in producing significant performance gains (up to +36% across the considered metrics and datasets) that exceed those achieved by other state-of-the-art intrusion detection models from the literature.
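
To make the pipeline concrete, the sketch below illustrates the general idea of feeding explanations of a front-end classifier into a second-stage model. It is a minimal, hedged example assuming a SHAP tree explainer over scikit-learn models on synthetic data; the dataset, model choices, and the decision to concatenate attributions with raw features are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative sketch: a front-end "NIDS" classifier whose SHAP explanations are
# fed, together with the original features, into a second-stage model.
# All data, models, and hyperparameters here are placeholders.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Placeholder "traffic" data standing in for a real intrusion dataset.
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# 1) Front-end NIDS: an ordinary black-box classifier.
front_end = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# 2) XAI step: per-sample, per-feature SHAP attributions of the front-end.
explainer = shap.TreeExplainer(front_end)
shap_tr = explainer.shap_values(X_tr)  # shape: (n_samples, n_features)
shap_te = explainer.shap_values(X_te)

# 3) Second-stage model: learn from the explanations (here concatenated with
#    the raw features) in an attempt to refine the front-end's decisions.
Z_tr = np.hstack([X_tr, shap_tr])
Z_te = np.hstack([X_te, shap_te])
second_stage = RandomForestClassifier(random_state=0).fit(Z_tr, y_tr)

print("front-end F1 :", f1_score(y_te, front_end.predict(X_te)))
print("two-stage F1 :", f1_score(y_te, second_stage.predict(Z_te)))
```

The key design point reflected here is that explanations are treated as fine-grained features in their own right, so any downstream learner can consume them; how the second stage is supervised and combined with the front-end in the actual methodology is detailed in the paper itself.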