Title: Similarity-based optimised and adaptive adversarial attack on image classification using neural network

Authors: Balika J. Chelliah; Mohammad Mustafa Malik; Ashwin Kumar; Nitin Singh; R. Regin

Addresses: Department of Computer Science and Engineering, SRM Institute of Science and Technology, Chennai, Tamil Nadu-600 089, India (all authors)

Abstract: Deep learning (DL) techniques have been widely adopted for image classification, natural language processing (NLP), and speech recognition. However, research on model security is dominated by unrealistic adversarial samples, while truly realistic adversarial attacks, the kind that compromise real-world applications, remain worryingly understudied. Studying them helps in understanding adversarial robustness under real-world conditions. Using real-world cases and data, we test whether unrealistic adversarial samples can protect models against genuine ones. Nodal dropouts applied to the first convolutional layer reveal which deep-learning neurons are weak and which are stable, and adversarial targeting links these neurons to network adversaries. Adversarial resilience of neural networks is a popular research topic, yet a DL network can still fail when its input images are skilfully manipulated. Our results show that unrealistic examples are as effective as realistic ones, or yield only small improvements. Second, we investigate the hidden representations of adversarial instances under both realistic and unrealistic attacks to explain these results. We illustrate how unrealistic samples can serve similar purposes, helping future studies bridge realistic and unrealistic adversarial approaches, and we release our code, datasets, models, and findings.

Keywords: deep neural network; DNN; interactive gradient shielding; generative adversarial networks; adversarial samples.

DOI: 10.1504/IJIEI.2023.130715

International Journal of Intelligent Engineering Informatics, 2023 Vol.11 No.1, pp.71 - 95

Received: 28 Jun 2022
Accepted: 30 Jan 2023

Published online: 03 May 2023
