References
[1]. Ren, K., Zheng, T. and Qin, Z., et al. (2020). Adversarial Attacks and Defenses in Deep Learning. Engineering 6(3), 15.
[2]. Moosavi-Dezfooli, S. M., Fawzi, A. and Fawzi, O., et al. (2017). Universal adversarial perturbations. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE.
[3]. Liu, Y., Chen, X. and Liu, C., et al. (2016). Delving into Transferable Adversarial Examples and Black-box Attacks. arXiv preprint. https://doi.org/10.48550/arXiv.1611.02770.
[4]. Szegedy, C., Zaremba, W. and Sutskever, I., et al. (2013). Intriguing properties of neural networks. arXiv preprint. https://doi.org/10.48550/arXiv.1312.6199.
[5]. Papernot, N., McDaniel, P. and Wu, X., et al. (2016). Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks. 2016 IEEE Symposium on Security and Privacy (SP). IEEE.
[6]. Carlini, N., Mishra, P. and Vaidya, T., et al. (2016). Hidden voice commands. In 25th USENIX Security Symposium (USENIX Security 16), Austin, TX.
[7]. Goodfellow, I., Shlens, J. and Szegedy, C. (2015). Explaining and harnessing adversarial examples. In: Proceedings of 2015 International Conference on Learning Representations (ICLR Poster). San Diego, CA, USA.
[8]. Kurakin, A., Goodfellow, I. and Bengio, S. (2017). Adversarial examples in the physical world. In: Proceedings of 2017 International Conference on Learning Representations (ICLR). Toulon, France, 1-14.
[9]. Dong, Y., Liao, F. and Pang, T., et al. (2018). Boosting adversarial attacks with momentum. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 9185-9193.
[10]. Wang, X. and He, K. (2021). Enhancing the transferability of adversarial attacks through variance tuning. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1924-1933.
[11]. Wang, G., Yan, H. and Wei, X. (2022). Enhancing Transferability of Adversarial Examples with Spatial Momentum. In: Proceedings of the 5th Chinese Conference on Pattern Recognition and Computer Vision (PRCV 2022). https://doi.org/10.48550/arXiv.2203.13479.
[12]. Moosavi-Dezfooli, S. M., Fawzi, A. and Frossard, P. (2016). DeepFool: A simple and accurate method to fool deep neural networks. In: Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2574-2582. https://doi.org/10.1109/CVPR.2016.282.
[13]. Moosavi-Dezfooli, S. M., Fawzi, A. and Fawzi, O., et al. (2017). Universal adversarial perturbations. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1765-1773.
[14]. Xiao, C., Li, B. and Zhu, J. Y., et al. (2018). Generating Adversarial Examples with Adversarial Networks. arXiv preprint. https://doi.org/10.48550/arXiv.1801.02610.
[15]. Mangla, P., Jandial, S. and Varshney, S., et al. (2019). AdvGAN++: Harnessing latent layers for adversary generation. In: Neural Architects Workshop, ICCV 2019. https://doi.org/10.48550/arXiv.1908.00706.
[16]. Papernot, N., McDaniel, P. and Goodfellow, I., et al. (2016). Practical Black-Box Attacks against Machine Learning. In: Proceedings of the ACM Asia Conference on Computer and Communications Security (ASIA CCS 2017), 506-519.
[17]. Chen, P. Y., Zhang, H. and Sharma, Y., et al. (2017). ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 15-26.
[18]. Tu, C. C., Ting, P. and Chen, P. Y., et al. (2018). AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks. arXiv preprint. https://doi.org/10.48550/arXiv.1805.11770.
[19]. Guo, C., Gardner, J. R. and You, Y., et al. (2019). Simple Black-box Adversarial Attacks. arXiv preprint. https://doi.org/10.48550/arXiv.1905.07121.
[20]. Brendel, W., Rauber, J. and Bethge, M. (2017). Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models. arXiv preprint. https://doi.org/10.48550/arXiv.1712.04248.
[21]. Huang, Z. and Zhang, T. (2019). Black-Box Adversarial Attack with Transferable Model-based Embedding. arXiv preprint. https://doi.org/10.48550/arXiv.1911.07140.
[22]. Wang, X., Zhang, Z. and Tong, K., et al. (2021). Triangle Attack: A Query-efficient Decision-based Adversarial Attack. arXiv preprint. https://doi.org/10.48550/arXiv.2112.06569.
[23]. Yuan, Z., Zhang, J. and Jia, Y., et al. (2021). Meta Gradient Adversarial Attack. arXiv preprint. https://doi.org/10.48550/arXiv.2108.04204.
[24]. Zhou, M., Wu, J. and Liu, Y., et al. (2020). DaST: Data-free Substitute Training for Adversarial Attacks. arXiv preprint. https://doi.org/10.48550/arXiv.2003.12703.
[25]. Ilyas, A., Santurkar, S. and Tsipras, D., et al. (2019). Adversarial Examples Are Not Bugs, They Are Features. arXiv preprint. https://doi.org/10.48550/arXiv.1905.02175.
[26]. Wang, W., Yin, B. and Yao, T., et al. (2021). Delving into Data: Effectively Substitute Training for Black-box Attack. arXiv preprint. https://doi.org/10.48550/arXiv.2104.12378.
[27]. Su, J., Vargas, D. V. and Sakurai, K. (2017). One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation 23(5), 828-841. https://doi.org/10.48550/arXiv.1710.08864.
Cite this article
Chen, L. (2023). Review on adversarial attack techniques of DNN. Applied and Computational Engineering, 17, 241-253.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.
Disclaimer/Publisher's Note
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
About volume
Volume title: Proceedings of the 5th International Conference on Computing and Data Science
© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication, with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (see Open access policy for details).