Gradient-free adversarial attack algorithm based on differential evolution
Online publication date: Tue, 16-Jan-2024
by Qingan Da; Guoyin Zhang; Sizhao Li; Zechao Liu; Wenshan Wang
International Journal of Bio-Inspired Computation (IJBIC), Vol. 22, No. 4, 2023
Abstract: Deep learning models are susceptible to adversarial examples even in the black-box setting, which means intelligent systems based on deep learning carry security risks. Research on adversarial attacks is therefore crucial to improving the robustness of deep learning models. Most existing attack algorithms are query-intensive and require the model to provide detailed outputs. We focus on a more restrictive threat model and propose a gradient-free adversarial attack algorithm based on differential evolution. In particular, we design two fitness functions to achieve targeted and non-targeted attacks, and we introduce an elimination mechanism in the selection phase to speed up the convergence of the algorithm. Experiments on MNIST, CIFAR-10, and ImageNet demonstrate the effectiveness of the proposed method. Comparisons with C&W, ZOO, and GenAttack show that our method performs favourably in terms of attack success rate, the number of queries required for a successful attack, and the amount of information needed from a single query.
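The abstract describes the attack only at a high level. As a rough illustration of the general idea, the sketch below shows a generic differential-evolution loop for a hard-label black-box attack: a population of bounded perturbations is evolved via DE/rand/1 mutation, binomial crossover, and greedy selection. The query_model interface, the simple fitness function, the L-infinity budget eps, and all hyper-parameters are assumptions for illustration; they are not the paper's two fitness functions or its elimination mechanism.

```python
import numpy as np

def de_blackbox_attack(query_model, x_orig, target_label, pop_size=20,
                       mutation=0.5, crossover=0.7, eps=0.05, max_iters=500):
    """Minimal differential-evolution black-box attack sketch (illustrative only).

    query_model(x) is assumed to return just the predicted label (hard-label
    setting); the fitness below is a simple stand-in, not the paper's design.
    """
    d = x_orig.size
    # Initialise a population of perturbations within the L_inf budget eps.
    pop = np.random.uniform(-eps, eps, size=(pop_size, d))

    def fitness(delta):
        x_adv = np.clip(x_orig.ravel() + delta, 0.0, 1.0)
        pred = query_model(x_adv.reshape(x_orig.shape))
        # Targeted attack: reward hitting the target label, penalise perturbation size.
        hit = 1.0 if pred == target_label else 0.0
        return hit - 0.01 * np.linalg.norm(delta)

    scores = np.array([fitness(p) for p in pop])
    for _ in range(max_iters):
        for i in range(pop_size):
            # DE/rand/1 mutation: combine three randomly chosen population members.
            a, b, c = pop[np.random.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + mutation * (b - c), -eps, eps)
            # Binomial crossover between the current member and the mutant.
            mask = np.random.rand(d) < crossover
            trial = np.where(mask, mutant, pop[i])
            trial_score = fitness(trial)
            # Greedy selection: keep the trial only if it is at least as fit.
            if trial_score >= scores[i]:
                pop[i], scores[i] = trial, trial_score
        best = pop[np.argmax(scores)]
        x_best = np.clip(x_orig.ravel() + best, 0.0, 1.0).reshape(x_orig.shape)
        if query_model(x_best) == target_label:
            return x_best
    return np.clip(x_orig.ravel() + pop[np.argmax(scores)], 0.0, 1.0).reshape(x_orig.shape)
```

Note that each fitness evaluation costs one model query, so the population size and iteration budget directly determine the query count; the paper's elimination mechanism in the selection phase is aimed at reducing exactly this cost.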