Abstract: The graph convolutional network (GCN) is an important method for processing graph-structured data. Recent research shows that GCNs are highly vulnerable to adversarial attacks: modifying a small amount of data can significantly change their output. Among adversarial attacks on GCNs, the universal adversarial attack is a special case in which a single perturbation applied to all samples causes the GCN to produce erroneous results. This study focuses on targeted universal adversarial attacks and proposes GTUA, which adds gradient-based selection to the existing TUA algorithm. Experimental results on three popular datasets show that the proposed method matches existing methods in only a few classes and outperforms them in most classes, improving the average attack success rate (ASR) by 1.7%.