Adversarial Attacks for PyTorch

Project description

Adversarial-Attacks-Pytorch

This is a lightweight repository of adversarial attacks for PyTorch.

It provides popular attack methods and some utilities.

Documentation for this package is available here.

Table of Contents

  1. Usage
  2. Attacks and Papers
  3. Demos
  4. Update Records

Usage

Dependencies

  • torch 1.2.0
  • python 3.6

Installation

  • pip install torchattacks or
  • git clone https://github.com/Harry24k/adversairal-attacks-pytorch

```python
import torchattacks
pgd_attack = torchattacks.PGD(model, eps=4/255, alpha=8/255)
adversarial_images = pgd_attack(images, labels)
```

Precautions

  • WARNING :: All images should be scaled to [0, 1] with transforms.ToTensor() before being used in attacks.
  • WARNING :: All models should return ONLY ONE vector of shape (N, C), where C = number of classes.
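A minimal sketch of inputs that satisfy both precautions; the toy model, image size, and class count here are illustrative assumptions, not part of the package:

```python
import torch
import torch.nn as nn

# A toy classifier returning a single (N, C) logit tensor, C = 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

# Images must already be scaled to [0, 1]; transforms.ToTensor() does this
# when loading from PIL images. Here we simulate such a batch directly.
images = torch.rand(8, 3, 32, 32)   # values in [0, 1]
labels = torch.randint(0, 10, (8,))

outputs = model(images)
assert outputs.shape == (8, 10)                          # exactly one (N, C) tensor
assert 0.0 <= images.min() and images.max() <= 1.0       # valid input range
```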

Attacks and Papers

The papers and methods, with a brief summary and example. All attacks in this repository are provided as CLASSES. If you want attacks implemented as functions, please refer to the repositories linked below.

  • Explaining and harnessing adversarial examples : Paper, Repo

    • FGSM
  • DeepFool: a simple and accurate method to fool deep neural networks : Paper

    • DeepFool
  • Adversarial Examples in the Physical World : Paper, Repo

    • BIM or iterative-FGSM
    • StepLL
  • Towards Evaluating the Robustness of Neural Networks : Paper, Repo

    • CW(L2)
  • Ensemble Adversarial Training: Attacks and Defenses : Paper, Repo

    • RFGSM
  • Towards Deep Learning Models Resistant to Adversarial Attacks : Paper, Repo

    • PGD(Linf)
  • Comment on "Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network" : Paper

    • APGD(EOT + PGD)
[Table of clean vs. adversarial example images omitted; attacks shown: FGSM, BIM, StepLL, RFGSM, CW, PGD (w/o random starts), PGD (w/ random starts), DeepFool]
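As an illustration of the simplest of these methods, FGSM can be sketched as a single signed-gradient step. This is a hedged sketch, not the packaged torchattacks.FGSM implementation; the toy model and eps value are assumptions:

```python
import torch
import torch.nn as nn

def fgsm(model, images, labels, eps=8/255):
    # Take one step of size eps in the sign direction of the input gradient.
    images = images.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    adv = images + eps * grad.sign()
    return adv.clamp(0, 1).detach()    # keep outputs in the valid [0, 1] range

# Toy usage with random data and a linear classifier (assumptions).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
adv = fgsm(model, torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,)))
```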

Demos

  • White Box Attack with ImageNet (code): Creates adversarial examples from the ImageNet dataset to fool Inception v3. However, the full ImageNet dataset is too large, so only the 'Giant Panda' class is used.

  • Black Box Attack with CIFAR10 (code): This demo provides an example of a black-box attack using two different models. First, adversarial datasets are generated from a holdout model on CIFAR10 and saved as a torch dataset. Second, the adversarial datasets are used to attack a target model.
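The two-step transfer setup described above can be sketched as follows. The model definitions, image shapes, and the use of a one-step FGSM perturbation are illustrative assumptions standing in for the demo's actual CIFAR10 models:

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

# Stand-ins for the holdout and target models (assumptions).
holdout = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
target = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

images = torch.rand(16, 3, 32, 32)
labels = torch.randint(0, 10, (16,))

# Step 1: craft adversarial images against the holdout model.
x = images.clone().requires_grad_(True)
loss = nn.CrossEntropyLoss()(holdout(x), labels)
grad = torch.autograd.grad(loss, x)[0]
adv_images = (images + (8 / 255) * grad.sign()).clamp(0, 1).detach()

# Step 2: store them as a torch dataset and evaluate the target model on them.
adv_loader = DataLoader(TensorDataset(adv_images, labels), batch_size=8)
correct = sum((target(xb).argmax(1) == yb).sum().item() for xb, yb in adv_loader)
```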

  • Adversarial Training with MNIST (code): This demo shows how to perform adversarial training with this repository, using the MNIST dataset and a custom model. Adversarial training is performed with PGD, and FGSM is then applied to test the model.
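The training loop the MNIST demo describes can be sketched as below: each clean batch is replaced by a PGD adversarial batch before the usual forward/backward pass. The model, optimizer, and PGD hyperparameters here are assumptions for illustration:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

def pgd(images, labels, eps=0.3, alpha=0.01, iters=7):
    adv = images.clone().detach()
    for _ in range(iters):
        adv.requires_grad_(True)
        grad = torch.autograd.grad(criterion(model(adv), labels), adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around the clean images, then [0, 1].
        adv = (images + (adv - images).clamp(-eps, eps)).clamp(0, 1)
    return adv.detach()

# One adversarial training step on a toy batch.
images, labels = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))
adv = pgd(images, labels)            # train on the adversarial batch
loss = criterion(model(adv), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```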

Update Records

~ Version 0.3

  • New Attacks : FGSM, IFGSM, IterLL, RFGSM, CW(L2), PGD are added.
  • Demos are uploaded.

Version 0.4

  • DO NOT USE : '__init__.py' is omitted.

Version 0.5

  • Package name changed : 'attacks' is changed to 'torchattacks'.
  • New Attack : APGD is added.
  • attack.py : 'update_model' method is added.

Version 0.6

  • Error solved :
    • Before this version, the model remained in evaluation mode even after adversarial images were generated.
    • To solve this, the following methods were modified:
      • A '_switch_model' method was added to attack.py. The model is switched to evaluation mode while adversarial images are generated, then automatically returned to its previous mode afterwards.
      • The 'call' method in every attack was changed to 'forward'. Instead, a 'call' method was added to 'attack.py'.
  • attack.py : To make it easy to convert images from float to uint8, 'set_mode' and '_to_uint' were added.
    • 'set_mode' determines whether all outputs are returned as 'int' or 'float' via '_to_uint'.
    • '_to_uint' converts all outputs to uint8.
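Illustratively, the float-to-uint8 conversion that '_to_uint' performs can be sketched as follows; the function body is an assumption based on the changelog description, not the package's actual code:

```python
import torch

def to_uint(images: torch.Tensor) -> torch.Tensor:
    # Map float images in [0, 1] to uint8 values in [0, 255] (truncating).
    return (images * 255).type(torch.uint8)

floats = torch.tensor([0.0, 0.5, 1.0])
ints = to_uint(floats)   # tensor([  0, 127, 255], dtype=torch.uint8)
```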

Version 0.7

  • All attacks were modified :
    • clone().detach() is used instead of .data.
    • torch.autograd.grad is used instead of .backward() and .grad :
      • This reduced computation time by about 2%.
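The difference between the two gradient styles can be illustrated as follows; both compute the same input gradient, and the 2% speedup figure is the changelog's claim, not demonstrated here:

```python
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)

# Style used before v0.7: .backward() populates x.grad as a side effect.
(x ** 2).sum().backward()
grad_via_backward = x.grad.clone()

# Style used from v0.7 on: torch.autograd.grad returns the gradient
# directly, without touching any .grad buffers.
y = torch.tensor([1.0, 2.0], requires_grad=True)
grad_via_autograd = torch.autograd.grad((y ** 2).sum(), y)[0]

assert torch.equal(grad_via_backward, grad_via_autograd)  # both [2., 4.]
```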

Version 0.8

  • New Attack : RPGD is added.
  • attack.py : The 'update_model' method is deprecated. Because torch models are passed by reference, there is no need to update them.
  • cw.py : In the CW attack, masked_select now uses a mask of dtype torch.bool instead of torch.uint8.

Version 0.9

  • New Attack : DeepFool is added.
  • Some attacks are renamed :
    • I-FGSM -> BIM
    • IterLL -> StepLL

Version 1.0

  • attack.py :
    • load : Deprecated. Use TensorDataset and DataLoader instead.
    • save : Fixed an incorrect accuracy calculation when the attack mode is set to 'int'.

Version 1.1

Version 1.2

  • A description has been added for each module.
  • Sphinx documentation uploaded.
  • attack.py : 'device' is now decided by next(model.parameters()).device.
  • Two attacks are merged :
    • RPGD, PGD -> PGD
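The device inference the v1.2 note describes amounts to taking the device of the model's first parameter, so the user no longer has to pass it in. A minimal sketch (the toy model here is an assumption):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                  # parameters live on the CPU by default
device = next(model.parameters()).device # infer where the model is
inputs = torch.rand(3, 4).to(device)     # move inputs to the same device
outputs = model(inputs)
```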

Download files


Source Distributions

No source distribution files are available for this release.

Built Distribution


torchattacks-1.3-py3-none-any.whl (15.9 kB, Python 3)

File details

Details for the file torchattacks-1.3-py3-none-any.whl.

File metadata

  • Download URL: torchattacks-1.3-py3-none-any.whl
  • Upload date:
  • Size: 15.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.4.2 requests/2.18.4 setuptools/39.1.0 requests-toolbelt/0.9.1 tqdm/4.32.1 CPython/3.6.5

File hashes

Hashes for torchattacks-1.3-py3-none-any.whl:

  • SHA256: 4c11eb4a9849e18f2f8a111f980abf73b3299411c79e49109b770b1851a7aa36
  • MD5: 15ab126863786b26f2662c911c6cdd1f
  • BLAKE2b-256: 3a1543d6010be0ef23ce21633749dae17cc264e8fbae1590c941c0b66ab6de50
