Title: EIUAPA: an efficient and imperceptible universal adversarial attack on audio classification models
Authors: Huifeng Li; Pengzhou Jia; Weixun Li; Bin Ma; Bo Li; Dexin Wu; Haoran Li
Addresses:
State Grid Hebei Electric Power Research Institute, Shijiazhuang, Hebei, China; State Grid Handan Electric Power Supply Company, Handan, Hebei, China
State Grid Hebei Electric Power Research Institute, Shijiazhuang, Hebei, China; State Grid Handan Electric Power Supply Company, Handan, Hebei, China
State Grid Hebei Electric Power Co., Ltd., Shijiazhuang, Hebei, China
State Grid Hebei Electric Power Co., Ltd., Shijiazhuang, Hebei, China
NARI Group Corporation (State Grid Electric Power Research Institute), Nanjing, Jiangsu, China; Beijing Kedong Electric Power Control System Co., Ltd., Beijing, China
Software College, Northeastern University, Shenyang, Liaoning, China
Software College, Northeastern University, Shenyang, Liaoning, China
Abstract: Audio classification models are emerging as a significant paradigm, yet they remain susceptible to universal adversarial attacks, in which a single optimised perturbation is added to all audio samples to induce incorrect predictions. Nonetheless, existing attack methodologies suffer from inefficiency and poor imperceptibility. To streamline the attack process, we propose EIUAPA, a two-step strategy that provides an optimal starting point for the perturbation optimisation process, yielding a notable decrease in generation time. To maintain imperceptibility, we introduce a set of metrics for perturbation concealment that serve as optimisation objectives, ensuring that perturbations are not only concealed in the frequency and time domains but also remain statistically indistinguishable. Experimental results demonstrate that our method generates universal adversarial perturbations (UAPs) 87.5% and 86.8% faster than baseline methods, with improved signal-to-noise ratio (SNR) and attack success rate (ASR) scores.
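The abstract's core operations, applying one universal perturbation to any audio clip and measuring its imperceptibility via SNR, can be sketched as follows. This is a minimal illustration, not the authors' EIUAPA implementation; the function names, the 16 kHz sine-wave test clip, and the perturbation magnitude are all illustrative assumptions.

```python
import numpy as np

def apply_uap(audio: np.ndarray, uap: np.ndarray) -> np.ndarray:
    """Add a single universal perturbation to an audio clip, clipped to the valid [-1, 1] range."""
    return np.clip(audio + uap, -1.0, 1.0)

def snr_db(clean: np.ndarray, perturbed: np.ndarray) -> float:
    """Signal-to-noise ratio in dB; higher values mean a less perceptible perturbation."""
    noise = perturbed - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

# Illustrative example: a 1-second 440 Hz tone at 16 kHz and a small random "UAP"
# (a real UAP would be optimised against the target classifier).
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
uap = 0.005 * rng.standard_normal(16000)

adv = apply_uap(clean, uap)
print(f"SNR: {snr_db(clean, adv):.1f} dB")
```

Because the same `uap` vector is reused across every input, a universal attack amortises its (costly) optimisation once; EIUAPA's contribution, per the abstract, is reducing that optimisation time via a better initial perturbation.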
Keywords: adversarial attack; artificial intelligence; security and privacy; audio classification; deep learning.
DOI: 10.1504/IJCSE.2025.147609
International Journal of Computational Science and Engineering, 2025 Vol.28 No.4, pp.434 - 445
Received: 28 Feb 2024
Accepted: 11 Jun 2024
Published online: 24 Jul 2025