A New Method to Detect the Adversarial Attack Based on the Residual Image

Feng Sun,
Zhenjiang Zhang,
Yi-Chih Kao,
Tianzhou Li,
Bo Shen

Abstract


Nowadays, with the development of artificial intelligence, deep learning has attracted more and more attention. While deep neural networks have made remarkable progress in many domains, including Computer Vision and Natural Language Processing, recent studies show that they are vulnerable to adversarial attacks, which take legitimate images with imperceptible perturbations as input and mislead the model into predicting incorrect outputs. We consider the key point of an adversarial attack to be the undetected perturbation added to the input, so eliminating the effect of this added noise is of great significance. Thus, we design a new, efficient model based on the residual image that can detect such potential adversarial attacks. We first design a method to obtain the residual image, which captures these possible perturbations; based on the residual image, the detection mechanism then decides whether an input is an adversarial image or not. A series of experiments has also been carried out, and the results prove that the new detection method can detect adversarial attacks with high effectiveness.
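The abstract does not spell out the algorithm, but a minimal sketch can clarify the residual-image idea it describes. The code below is an illustrative assumption, not the authors' implementation: it obtains a residual by subtracting a denoised (median-filtered) copy of the input from the input itself, then flags inputs whose residual statistics exceed a threshold calibrated on clean images. The names residual_image and is_adversarial, the choice of median filtering as the denoiser, and the threshold value are all hypothetical.

import numpy as np
from scipy.ndimage import median_filter

def residual_image(image, filter_size=3):
    # Residual = input minus a denoised copy of itself. Adversarial
    # perturbations are largely high-frequency noise, so the residual
    # retains the perturbation while suppressing natural image content.
    # `image` is assumed to be a 2-D grayscale array with values in [0, 255].
    denoised = median_filter(image, size=filter_size)
    return image.astype(np.float32) - denoised.astype(np.float32)

def is_adversarial(image, threshold=2.0):
    # Flag the input if its mean absolute residual exceeds a threshold.
    # `threshold` is a placeholder: in practice it would be calibrated on
    # residuals of known-clean images, or the residual image would be fed
    # to a learned binary classifier instead of a scalar test.
    residual = residual_image(image)
    return float(np.abs(residual).mean()) > threshold

In a full system, a learned classifier taking the residual image as input would likely replace the scalar threshold; the statistic above only illustrates the detection step.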


Citation Format:
Feng Sun, Zhenjiang Zhang, Yi-Chih Kao, Tianzhou Li, Bo Shen, "A New Method to Detect the Adversarial Attack Based on the Residual Image," Journal of Internet Technology, vol. 20, no. 4, pp. 1297-1304, Jul. 2019.

