
Explainable AI for Security Analysts: Enhancing Cybersecurity with Machine Learning Models

EasyChair Preprint 14006

13 pages
Date: July 17, 2024

Abstract

This paper examines the effectiveness of machine learning models in cybersecurity and highlights the importance of explainable AI in empowering security analysts. As cyber threats grow in complexity and sophistication, organizations are turning to advanced technologies such as machine learning to strengthen their defenses. However, the black-box nature of traditional machine learning algorithms hinders their adoption in security operations. This paper explores the concept of explainable AI and its potential to address this limitation by providing interpretable insights into the decision-making processes of machine learning models. By improving transparency and accountability, explainable AI gives security analysts the tools they need to understand, validate, and trust the outputs of these models. Through an examination of current research and industry practice, this study underscores the significance of explainable AI in enabling effective collaboration between humans and machine learning algorithms, ultimately bolstering cybersecurity efforts.
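To make the abstract's central claim concrete, that explainable AI gives analysts interpretable insights into a model's decisions, the following is a minimal sketch, not drawn from the paper: it trains a hypothetical random-forest detector on synthetic "network flow" features and uses SHAP, one common post-hoc explanation technique, to attribute each alert to the features that drove it. The model choice, feature names, and use of SHAP are all illustrative assumptions rather than methods the preprint prescribes.

# Illustrative sketch only (requires: pip install shap scikit-learn).
# The detector, feature names, and explanation method are assumptions,
# not taken from the preprint.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "network flow" features; labels encode a crude notion of
# malicious traffic driven by failed logins and bytes sent.
X = rng.normal(size=(1000, 4))
y = (X[:, 3] + 0.5 * X[:, 1] > 1).astype(int)
feature_names = ["duration", "bytes_sent", "bytes_recv", "failed_logins"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# TreeExplainer attributes each prediction to the input features,
# giving the analyst a per-alert rationale instead of a bare score.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test[:5])

for i in range(5):
    pred = model.predict(X_test[i : i + 1])[0]
    # Older shap versions return a list per class; newer ones return
    # an array of shape (samples, features, classes).
    contribs = (shap_values[1][i] if isinstance(shap_values, list)
                else shap_values[i, :, 1])
    top = np.argsort(-np.abs(contribs))[:2]
    print(f"alert {i}: predicted={'malicious' if pred else 'benign'}, "
          f"top features: {[feature_names[j] for j in top]}")

For an analyst, the value lies in per-alert attributions such as "flagged mainly because of failed_logins", which can be checked against domain knowledge before the alert is trusted or escalated.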

Keyphrases: algorithms, machine learning

BibTeX entry
BibTeX does not have the right entry type for preprints. The entry below is a workaround for producing a correct reference:
@booklet{EasyChair:14006,
  author       = {Kaledio Potter and Favour Olaoye and Lucas Doris},
  title        = {Explainable AI for Security Analysts: Enhancing Cybersecurity with Machine Learning Models},
  howpublished = {EasyChair Preprint 14006},
  year         = {2024}}