
EEDIS Laboratory (Evolutionary Engineering and Distributed Information Systems)


Adversarial Example Detection for DNN Models: A Review

Authors: Aldahdooh Ahmed
         Hamidouche Wassim
         Deforges Olivier
Type: Book chapter
Edition: arXiv preprint arXiv:21 ISBN:
Link:
Published: 01-05-2021

Deep Learning (DL) has shown great success in many human-related tasks, which has led to its adoption in many computer vision based applications, such as security surveillance systems, autonomous vehicles, and healthcare. Such safety-critical applications can only be deployed successfully once they are able to overcome safety-critical challenges. Among these challenges is the defense against, and/or the detection of, the adversarial example (AE). An adversary can carefully craft small, often imperceptible, noise, called a perturbation, to be added to the clean image in order to generate the AE. The aim of the AE is to fool the DL model, which makes it a potential risk for DL applications. Many test-time evasion attacks and countermeasures, i.e., defense or detection methods, have been proposed in the literature. Moreover, a few reviews and surveys have been published that theoretically present the taxonomy of the threats and the countermeasure methods, with little focus on AE detection methods. In this paper, we attempt to provide a theoretical and experimental review of AE detection methods. A detailed discussion of such methods is provided, and experimental results for eight state-of-the-art detectors are presented under different scenarios on four datasets. We also provide potential challenges and future perspectives for this research direction.
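To illustrate the kind of test-time evasion attack the abstract refers to, below is a minimal sketch of crafting an adversarial perturbation with the fast gradient sign method (FGSM), one common attack from the literature. It assumes a trained PyTorch classifier `model` and a labelled input batch `(x, y)`; it is an illustrative example, not the specific attack or detection method evaluated in the paper.

```python
# Minimal FGSM-style adversarial example sketch (illustrative only).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example with one signed-gradient step.

    model   : trained classifier (assumed, hypothetical)
    x, y    : clean image batch in [0, 1] and its true labels
    epsilon : perturbation magnitude (small, often imperceptible)
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp
    # back to the valid image range so the change stays subtle.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```

Detection methods such as those reviewed in the paper aim to flag inputs like `x_adv` before or alongside the model's prediction.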

