COMPUTER SYSTEMS APPLICATIONS, 2023, 32(7): 276-283
Gradient-structure-based Adversarial Attacks on Graph Neural Network
(College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China)
Received: December 17, 2022    Revised: February 3, 2023
Abstract: Graph neural networks have achieved remarkable performance in semi-supervised node classification tasks. Studies have shown that graph neural networks are susceptible to perturbations, and their adversarial robustness has therefore attracted research attention. However, purely gradient-based attacks cannot guarantee optimal perturbations. To strengthen gradient-based perturbations, this study proposes an adversarial attack method based on both gradient and structure. The method first generates a candidate perturbation set through first-order optimization of the training loss, then evaluates the similarity of the candidates, ranks them according to the evaluation results, and selects a fixed budget of modifications to carry out the attack. The proposed attack is evaluated on semi-supervised node classification tasks over five datasets. Experimental results show that node classification accuracy drops significantly after only a small number of perturbations, and the proposed method clearly outperforms existing attack methods.
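The abstract describes a three-step pipeline: gradient-derived candidate edge flips, a structure-based similarity re-ranking, and a fixed-budget selection. The snippet below is a minimal sketch of how such a pipeline could be wired together on a dense adjacency matrix with a two-layer GCN surrogate; the surrogate model, the cosine-similarity re-ranking, and all function names and hyperparameters are assumptions made for illustration, not the paper's actual implementation.

```python
# Minimal sketch of a gradient-plus-structure edge-flip attack on a GCN surrogate.
# Illustrative only: the surrogate, the cosine-similarity re-ranking, and the
# hyperparameters are assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F


def normalize_adj(adj):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2} on a dense float adjacency.
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


def gcn_logits(adj, feats, w1, w2):
    # Two-layer GCN surrogate returning class logits.
    a_hat = normalize_adj(adj)
    return a_hat @ torch.relu(a_hat @ feats @ w1) @ w2


def attack(adj, feats, labels, train_mask, w1, w2, budget=10, n_candidates=100):
    # Step 1: first-order optimization signal -- gradient of the training loss
    # with respect to the adjacency matrix scores every possible edge flip.
    adj_var = adj.clone().requires_grad_(True)
    loss = F.cross_entropy(gcn_logits(adj_var, feats, w1, w2)[train_mask],
                           labels[train_mask])
    grad = torch.autograd.grad(loss, adj_var)[0]

    # A loss-increasing flip adds edge (i, j) where grad > 0 and the edge is
    # absent, or removes it where grad < 0 and the edge is present.
    score = torch.triu(grad * (1 - 2 * adj), diagonal=1)   # undirected, no self-loops
    flat = torch.topk(score.flatten(), n_candidates).indices
    cand = torch.stack([flat // adj.size(0), flat % adj.size(0)], dim=1)

    # Step 2: structural re-ranking of the candidate set. As an assumed stand-in
    # for the similarity evaluation, flips between nodes with dissimilar features
    # are preferred (lowest cosine similarity first).
    sim = F.cosine_similarity(feats[cand[:, 0]], feats[cand[:, 1]], dim=1)
    cand = cand[torch.argsort(sim)]

    # Step 3: apply only a fixed budget of the highest-ranked flips.
    perturbed = adj.clone()
    for i, j in cand[:budget].tolist():
        perturbed[i, j] = perturbed[j, i] = 1 - perturbed[i, j]
    return perturbed
```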
Funding: National Defense Basic Scientific Research Program (JCKY2020204C009)
Citation:
LI Ning-Shu, GUAN Dong-Hai, YUAN Wei-Wei. Gradient-structure-based Adversarial Attacks on Graph Neural Network. COMPUTER SYSTEMS APPLICATIONS, 2023, 32(7): 276-283