Detecting audio adversarial examples for protecting speech-to-text transcription neural networks
Online publication date: Wed, 02-Jun-2021
by Keiichi Tamura; Akitada Omagari; Hajime Ito; Shuichi Hashida
International Journal of Computational Intelligence Studies (IJCISTUDIES), Vol. 10, No. 2/3, 2021
Abstract: With the increasing use of deep learning in real-world applications, its vulnerabilities have received significant attention from deep-learning researchers and practitioners. In particular, adversarial examples for deep neural networks, and methods of protecting against them, have been well studied in recent years because these vulnerabilities threaten safety in the real world. Audio adversarial examples are targeted attacks crafted so that deep neural network-based speech-to-text systems mistranscribe the input audio. In this study, we propose a new protection method against audio adversarial examples. The proposed method is based on a sandbox approach, in which an input audio signal is checked within the system to determine whether it is an audio adversarial example. To evaluate the proposed method, we used actual audio adversarial examples crafted against DeepSpeech, a typical speech-to-text transcription neural network. The experimental results show that our method detects audio adversarial examples with high accuracy.
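The abstract does not detail the sandbox check itself, but one common family of detection techniques it alludes to can be sketched: transcribe the input both as-is and after a small benign perturbation, and flag inputs whose transcripts are unstable, since audio adversarial examples are often fragile to such transformations. The sketch below is purely illustrative under that assumption; `transcribe` is a hypothetical stand-in for a real speech-to-text model (e.g., DeepSpeech), not the authors' actual method.

```python
import difflib
import random
from typing import Callable, List


def add_noise(audio: List[float], scale: float, seed: int = 0) -> List[float]:
    """Return a copy of the waveform with small uniform noise added."""
    rng = random.Random(seed)
    return [s + rng.uniform(-scale, scale) for s in audio]


def is_adversarial(
    audio: List[float],
    transcribe: Callable[[List[float]], str],
    noise_scale: float = 0.002,
    threshold: float = 0.5,
) -> bool:
    """Sandbox-style check (illustrative, not the paper's method):
    flag the input if its transcript changes sharply under a small
    perturbation, which benign speech normally tolerates."""
    original = transcribe(audio)
    perturbed = transcribe(add_noise(audio, noise_scale))
    similarity = difflib.SequenceMatcher(None, original, perturbed).ratio()
    return similarity < threshold
```

In a real deployment `transcribe` would wrap the production model, and `noise_scale`/`threshold` would be tuned on held-out benign and adversarial audio; both values here are arbitrary placeholders.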