
Adversarial Machine Learning for Robust Security Systems

EasyChair Preprint 14569

11 pages
Date: August 28, 2024

Abstract

Adversarial machine learning (AML) explores the vulnerabilities of machine learning (ML) systems to carefully crafted input perturbations, which can undermine their reliability and security. This paper presents a comprehensive review of adversarial techniques and their implications for the development of robust security systems. We begin by detailing the theoretical foundations of adversarial attacks, including gradient-based and optimization-based methods, and examine how these attacks can exploit weaknesses in various ML models. Next, we explore defensive strategies designed to enhance the resilience of ML systems against adversarial threats, such as adversarial training, defensive distillation, and input preprocessing. We also address the trade-offs involved in implementing these defenses, including potential impacts on model performance and computational efficiency. Furthermore, the paper discusses emerging trends and future research directions in adversarial machine learning, highlighting the need for innovative solutions to address evolving attack vectors. By providing a critical overview of current methods and challenges, this paper aims to advance the development of secure ML systems capable of withstanding adversarial manipulation and ensuring reliable operation in real-world scenarios.
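To make the gradient-based attacks mentioned above concrete, the sketch below illustrates the Fast Gradient Sign Method (FGSM), a canonical gradient-based attack, against a toy logistic-regression classifier. The model weights, input, and epsilon value are all illustrative assumptions, not taken from the paper; the point is only to show how a perturbation in the direction of the loss gradient's sign can flip a prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y, w, b):
    # Binary cross-entropy for a single example.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def input_gradient(x, y, w, b):
    # For logistic regression, dL/dx has the closed form (p - y) * w.
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(x, y, w, b, eps):
    # Move each input feature by eps in the direction that increases the loss.
    return x + eps * np.sign(input_gradient(x, y, w, b))

# Hypothetical fixed "model" and a clean input correctly classified as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.5)
print(loss(x, y, w, b), loss(x_adv, y, w, b))  # adversarial loss is strictly larger
```

With these illustrative values the clean input is classified correctly (p > 0.5) while the perturbed input is misclassified (p < 0.5), even though each feature moved by at most eps. Adversarial training, one of the defenses surveyed in the paper, amounts to generating such perturbed examples during training and including them in the loss.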

Keyphrases: Adversarial machine learning (AML), adversarial training, reliability and security

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:14569,
  author    = {Favour Olaoye and Axel Egon},
  title     = {Adversarial Machine Learning for Robust Security Systems},
  howpublished = {EasyChair Preprint 14569},
  year      = {EasyChair, 2024}}