Fig. 1 | Journal of Big Data

From: Ally patches for spoliation of adversarial patches

An exemplar image illustrating the effect of the proposed method. a An attack-free input image: the convolutional neural network classifies the image in agreement with the ground truth, with confidence close to 100%. b The input image after being attacked by a toaster-targeted adversarial patch generated by [3]: the adversarial patch misleads the network into classifying the input as the adversarial target, with a confidence value of almost 100%. c Classification using the proposed ally patches: the generated patch alliance disrupts the operation of the adversarial patch and escapes the fake saliency it induces. For clarity of illustration, only the confidence values of the top-3 labels are shown. The CNN model used in this example is VGG19 [22]
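The attack step depicted in panel (b) amounts to pasting a patch into the input image before classification. The sketch below shows only that overlay step, with NumPy and an assumed (H, W, C) image layout; the actual patch contents in [3] are obtained by optimization, and the ally-patch defense of panel (c) is not shown here. The function name `apply_patch` and all array sizes are illustrative, not from the paper.

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Paste a rectangular patch into a copy of an (H, W, C) image.

    A minimal sketch of the patch-overlay step in panel (b).
    NumPy slicing clips the region to the image bounds, so a patch
    placed near an edge is simply cropped.
    """
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch[:out.shape[0] - top, :out.shape[1] - left]
    return out

# Toy example: an all-zero 8x8 RGB "image" and an all-one 3x3 "patch".
img = np.zeros((8, 8, 3))
patch = np.ones((3, 3, 3))
attacked = apply_patch(img, patch, 2, 2)
```

After the overlay, `attacked` differs from `img` only inside the 3x3 region starting at row 2, column 2; the original image is left untouched because the function works on a copy.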