VIPL's paper on adversarial attacks is accepted by TPAMI

Time: Feb 28, 2024

Congratulations! VIPL's paper on adversarial attacks, “Adaptive Perturbation for Adversarial Attack” (Authors: Zheng Yuan, Jie Zhang, Zhaoyan Jiang, Liangliang Li, Shiguang Shan), was accepted by IEEE TPAMI. IEEE TPAMI, i.e., IEEE Transactions on Pattern Analysis and Machine Intelligence, is a CCF-rank-A top-tier artificial intelligence journal with a 2023 impact factor of 23.6.

In recent years, with the rapid development of neural networks, the security of deep learning models has attracted more and more attention, since these models are vulnerable to adversarial examples. Almost all existing gradient-based attack methods use the sign function during generation to meet the perturbation budget under the L∞ norm. However, we find that the sign function may be improper for generating adversarial examples, since it distorts the exact gradient direction. Instead of using the sign function, we propose to directly utilize the exact gradient direction with a scaling factor to generate adversarial perturbations, which improves the attack success rate of adversarial examples even with fewer perturbations. We also theoretically prove that this method achieves better black-box transferability. Moreover, considering that the best scaling factor varies across different images, we propose an adaptive scaling factor generator that seeks an appropriate scaling factor for each image, avoiding the computational cost of manually searching for one. Our method can be integrated with almost all existing gradient-based attack methods to further improve their attack success rates. Extensive experiments on the CIFAR-10 and ImageNet datasets show that our method exhibits higher transferability and outperforms the state-of-the-art methods.
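To make the difference concrete, here is a minimal PyTorch sketch of the two update rules. The helper names, the fixed scaling factor gamma, and the clipping details are illustrative assumptions: in the paper, the scaling factor is produced adaptively per image by a generator rather than fixed by hand.

```python
import torch

def bim_step(x_adv, grad, alpha, x_orig, eps):
    # Sign-based update (e.g., BIM): keeps only the sign of each
    # gradient component, which quantizes the update direction.
    x_adv = x_adv + alpha * grad.sign()
    # Project back into the L-infinity ball of radius eps around x_orig.
    return x_orig + torch.clamp(x_adv - x_orig, -eps, eps)

def scaled_grad_step(x_adv, grad, gamma, x_orig, eps):
    # Exact-gradient update: moves along the true gradient direction,
    # scaled by gamma. In the paper, gamma is generated adaptively per
    # image; a fixed value is used here purely for illustration.
    x_adv = x_adv + gamma * grad
    return x_orig + torch.clamp(x_adv - x_orig, -eps, eps)
```

Either step can be dropped into a standard iterative attack loop, which is why the approach composes with existing gradient-based methods.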

Fig. 1: A two-dimensional toy example illustrating the difference between our proposed APAA and existing sign-based methods, e.g., BIM [9]. The loss function is a mixture of Gaussian distributions, as described in Eq. (6). The orange and blue paths represent the update processes of BIM and our APAA, respectively, when generating adversarial examples, and the background color shows the contour of the loss function. During the adversarial attack, we aim to reach an adversarial example with a larger loss value. Due to the limitation of the sign function, there are only eight possible update directions in a two-dimensional space: (0, 1), (0, -1), (1, 1), (1, -1), (1, 0), (-1, -1), (-1, 0), (-1, 1). The update direction of BIM is therefore constrained and not accurate enough, so it reaches only a sub-optimal end point. Our method not only obtains a more accurate update direction but also adjusts the step size adaptively; as a result, APAA is more likely to reach the global optimum in fewer steps.
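A quick way to see the direction quantization described in the caption (an illustration of ours, not code from the paper): in two dimensions, sign() maps every gradient onto one of the eight listed directions, while the raw gradient keeps its exact orientation.

```python
import torch

# Each random 2-D gradient collapses under sign() to one of only
# eight possible directions; the raw gradient keeps the exact one.
torch.manual_seed(0)
for g in torch.randn(5, 2):
    print([round(v, 3) for v in g.tolist()], "->", g.sign().tolist())
```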


