
Advanced Techniques for Strengthening Adversarial Robustness in Deep Learning Models

EasyChair Preprint 15624

12 pages. Date: December 23, 2024

Abstract

Adversarial attacks represent a critical challenge to the reliability and security of machine learning systems, especially deep learning models. This paper delves into cutting-edge adversarial defense strategies, emphasizing adversarial training, robust optimization, and input preprocessing techniques. Through comprehensive analysis on various datasets, we assess the effectiveness of these methods using key performance metrics and robustness indicators. Furthermore, we introduce a novel hybrid approach that integrates adversarial augmentation with adaptive loss functions, aiming to improve model robustness without compromising accuracy.
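As a rough illustration of the adversarial-training component mentioned in the abstract, the sketch below shows a generic FGSM-based training loop in PyTorch. The function names, the epsilon value, and the equal weighting of clean and adversarial losses are illustrative assumptions for a minimal example, not the authors' method.

import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon, loss_fn):
    """Craft an FGSM adversarial example with one signed-gradient step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # Step in the direction that increases the loss, then clamp to the valid input range.
    return (x_adv + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_epoch(model, loader, optimizer, epsilon=8 / 255):
    """One epoch of mixed clean/adversarial training (assumed 50/50 weighting)."""
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y, epsilon, loss_fn)
        optimizer.zero_grad()
        # Averaging clean and adversarial losses is one common way to retain clean accuracy.
        loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
        loss.backward()
        optimizer.step()

Stronger variants replace the single FGSM step with a multi-step PGD attack inside the same loop; the paper's hybrid augmentation and adaptive loss functions would slot in where the fixed 0.5/0.5 weighting appears above.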

Keyphrases: algorithms, deep learning, machine learning, model

BibTeX entry
BibTeX does not have an entry type for preprints; the following workaround produces a correct reference:
@booklet{EasyChair:15624,
  author    = {Leonardo Delviz and John Francis and Mo Chen and Hou Zhang and Michael Lornwood},
  title     = {Advanced Techniques for Strengthening Adversarial Robustness in Deep Learning Models},
  howpublished = {EasyChair Preprint 15624},
  year      = {EasyChair, 2024}}