Abstract: Automatic method naming, an important task in software engineering, aims to generate a target function name for an input source code snippet, thereby enhancing the readability of program code and accelerating software development. Existing machine-learning-based approaches to automatic method naming mainly encode the source code with sequence models to generate the function name automatically. However, these approaches suffer from long-term dependency problems and difficulties in encoding code structure. To better extract structural and semantic information from programs, we propose an automatic method naming approach called TrGCN, based on the Transformer and the Graph Convolutional Network (GCN). In this approach, the self-attention mechanism of the Transformer is used to alleviate the long-term dependency problem, and a character-word attention mechanism is used to extract the semantic information of the code. TrGCN introduces a GCN-based AST encoder that enriches the feature vectors at AST nodes and effectively models the structural information of the source code. Empirical studies are conducted on three Java datasets. The results show that TrGCN outperforms conventional approaches, namely code2seq and Sequence-GNNs, in automatic method naming, with an F1-score 5.2% and 2.1% higher than those of the two approaches, respectively.
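To make the encoder described above concrete, the following is a minimal sketch, assuming a PyTorch-style implementation, of how Transformer self-attention over AST node embeddings might be combined with GCN layers over AST edges. The module names, dimensions, and the way the two components are composed are illustrative assumptions rather than the authors' implementation; the character-word attention mechanism and the decoder that emits the method name are omitted.

```python
# Illustrative sketch only: a Transformer encoder over AST node embeddings
# followed by simple GCN layers over the AST adjacency matrix.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(D^-1 (A + I) H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adj):
        # adj: (N, N) AST adjacency; add self-loops and mean-aggregate neighbours
        adj_hat = adj + torch.eye(adj.size(0), device=adj.device)
        deg = adj_hat.sum(dim=1, keepdim=True)
        agg = (adj_hat / deg) @ node_feats
        return torch.relu(self.linear(agg))

class TrGCNSketch(nn.Module):
    """Hypothetical combination of Transformer self-attention and a GCN AST encoder."""
    def __init__(self, vocab_size, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                               batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.gcn1 = GCNLayer(d_model, d_model)
        self.gcn2 = GCNLayer(d_model, d_model)

    def forward(self, node_ids, adj):
        # node_ids: (N,) token ids of AST nodes; adj: (N, N) AST edges
        h = self.embed(node_ids)                         # (N, d_model)
        h = self.transformer(h.unsqueeze(0)).squeeze(0)  # self-attention over nodes
        h = self.gcn2(self.gcn1(h, adj), adj)            # structural encoding via GCN
        return h                                         # per-node representations for the decoder
```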