
Table 4 Test accuracies showing the impact of adversarial training. "Clean" refers to the original testing data, FGSM refers to adversarial examples derived from the Fast Gradient Sign Method, and PGD refers to adversarial examples derived from Projected Gradient Descent [83]

From: A survey on Image Data Augmentation for Deep Learning

| Model | MNIST (Clean) | MNIST (FGSM) | MNIST (PGD) | CIFAR-10 (Clean) | CIFAR-10 (FGSM) | CIFAR-10 (PGD) |
|---|---|---|---|---|---|---|
| Standard | 0.9939 | 0.0922 | 0 | 0.9306 | 0.5524 | 0.0256 |
| Adversarially trained | 0.9932 | 0.9492 | 0.0612 | 0.8755 | 0.8526 | 0.1043 |
| Our method | 0.9903 | 0.9713 | 0.9171 | 0.8714 | 0.6514 | 0.3440 |
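For context on what the FGSM and PGD columns measure, below is a minimal sketch of how such adversarial examples are typically generated, assuming a PyTorch classifier `model`, inputs scaled to [0, 1], and a cross-entropy loss. The function names and the `epsilon`, `alpha`, and `steps` parameters are illustrative choices, not the code used in the cited experiments.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon):
    """One-step FGSM: perturb x by epsilon in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    # Step in the direction that increases the loss, then clamp to valid pixels
    return (x + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

def pgd_example(model, x, y, epsilon, alpha=0.01, steps=40):
    """PGD: iterated FGSM-style steps, projected back into the epsilon-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        # Project onto the L-infinity ball of radius epsilon, keep pixels valid
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0.0, 1.0)
    return x_adv
```

Adversarial training mixes examples like these into each training batch, which is what distinguishes the "Adversarially trained" row from the "Standard" row in the table above.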