Neural Architecture Dilation for Adversarial Robustness
Yanxi Li ¹, Zhaohui Yang ² ³, Yunhe Wang ², Chang Xu ¹
¹ School of Computer Science, University of Sydney, Australia
² Noah’s Ark Lab, Huawei Technologies, China
³ Key Lab of Machine Perception (MOE), Department of Machine Intelligence, Peking University, China
Thanks to tremendous advances in architecture and scale over the past few decades, convolutional neural networks (CNNs) can match or even exceed human performance on certain tasks. However, a recently discovered shortcoming of CNNs is their vulnerability to adversarial attacks: small, carefully crafted perturbations of the input that cause confident misclassifications. Although adversarial training can improve the adversarial robustness of CNNs, it comes with a trade-off between standard accuracy and adversarial robustness.
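To illustrate the kind of attack at issue, below is a minimal PyTorch sketch of the fast gradient sign method (FGSM). FGSM is a standard attack from the literature rather than this paper's contribution, and the function name and epsilon value here are illustrative choices.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Perturb x along the sign of the loss gradient so that a small,
    norm-bounded change maximally increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()  # stay in valid pixel range
```

Adversarial training replaces clean inputs with such perturbed ones during optimization, which improves robustness but typically lowers standard accuracy; this is the trade-off the paper targets.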
From the neural architecture perspective, this paper aims to improve the adversarial robustness of backbone CNNs that already achieve satisfactory accuracy. With minimal computational overhead, the introduced dilation architecture is expected to preserve the standard performance of the backbone CNN while pursuing adversarial robustness. Theoretical analyses of the standard and adversarial error bounds naturally motivate the proposed neural architecture dilation algorithm. Experimental results on real-world datasets and benchmark neural networks demonstrate the effectiveness of the proposed algorithm in balancing accuracy and adversarial robustness.
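As a rough illustration of the dilation idea, the sketch below wraps a pretrained backbone with a lightweight parallel branch. The class name DilatedNetwork, the additive combination of outputs, and the frozen backbone are assumptions made for exposition; the paper searches the dilation architecture rather than fixing one by hand.

```python
import torch
import torch.nn as nn

class DilatedNetwork(nn.Module):
    """Wrap a pretrained backbone with a lightweight parallel branch.

    The backbone is frozen to preserve its standard accuracy; only the
    branch (the 'dilation') is trained, e.g. on adversarial examples.
    """
    def __init__(self, backbone: nn.Module, branch: nn.Module):
        super().__init__()
        self.backbone = backbone
        self.branch = branch
        for p in self.backbone.parameters():
            p.requires_grad_(False)  # keep pretrained weights intact

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Additive combination of logits is an illustrative assumption.
        return self.backbone(x) + self.branch(x)
```

Because only the branch's parameters receive gradients, training it on adversarial examples pursues robustness while the backbone's standard predictions remain unchanged, matching the stated goal of minimal interference with standard performance.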