
L1-norm double backpropagation adversarial defense

Abstract: Adversarial examples are a challenging open problem for deep neural networks. We propose in this paper to add a penalization term that forces the decision function to be flat in some regions of the input space, so that it becomes, at least locally, less sensitive to attacks. Our proposition is theoretically motivated, and a first set of carefully conducted experiments shows that it behaves as expected when used alone and seems promising when coupled with adversarial training.
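The penalty the abstract describes can be illustrated with a minimal JAX sketch: the loss adds the L1 norm of the gradient of the decision function with respect to the input, and training then backpropagates through that gradient, hence "double backpropagation". This is a hedged illustration, not the paper's implementation; the toy linear `model`, the logistic fit term, and the weight `lam` are assumptions for the example.

```python
# Hedged sketch of an L1-norm double backpropagation penalty in JAX.
# `model` is a hypothetical stand-in for a deep network with a scalar logit;
# the paper's exact architecture and loss may differ.
import jax
import jax.numpy as jnp

def model(params, x):
    # Toy linear decision function standing in for a deep network.
    w, b = params
    return jnp.dot(w, x) + b

def penalized_loss(params, x, y, lam=0.1):
    # Standard fit term: logistic loss on the scalar logit, labels y in {-1, +1}.
    logit = model(params, x)
    fit = jnp.log1p(jnp.exp(-y * logit))
    # Penalization term: L1 norm of the gradient of the decision function
    # with respect to the *input* x, pushing the function to be flat
    # (locally less attack-sensitive) around x.
    grad_x = jax.grad(model, argnums=1)(params, x)
    return fit + lam * jnp.sum(jnp.abs(grad_x))

# Training differentiates penalized_loss w.r.t. the parameters, which
# backpropagates a second time through grad_x -- "double backpropagation".
loss_grad = jax.grad(penalized_loss)
```

For the linear toy model the input gradient is just `w`, so the penalty reduces to `lam * ||w||_1`; with a deep network the same code penalizes the local input sensitivity at each training point.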

Contributor: Gaëlle Loosli
Submitted on: Tuesday, March 5, 2019 - 8:44:47 AM
Last modification on: Wednesday, February 24, 2021 - 4:24:02 PM
Long-term archiving on: Thursday, June 6, 2019 - 12:26:26 PM




  • HAL Id: hal-02049020, version 1
  • arXiv: 1903.01715


Ismaïla Seck, Gaëlle Loosli, Stéphane Canu. L1-norm double backpropagation adversarial defense. ESANN - European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Apr 2019, Bruges, France. ⟨hal-02049020⟩


