Received: November 05, 2022    Revised: December 10, 2022
Abstract: Source code summarization aims to automatically generate precise natural language summaries for source code, helping developers better understand and maintain it. Traditional approaches use information retrieval techniques: they select relevant words from the original source code or adapt the summaries of similar code snippets. Recent work instead frames the task as machine translation and generates summaries for code snippets with encoder-decoder neural network models. Existing summarization methods suffer from two main problems. On the one hand, neural-network-based methods handle the high-frequency words that appear in code snippets well but tend to weaken the processing of low-frequency words. On the other hand, programming languages are highly structured, so source code cannot simply be treated as sequential text; doing so loses contextual structural information. To address the low-frequency word problem, this study proposes a retrieval-based neural machine translation approach that augments the neural model with similar code snippets retrieved from the training set. To learn the structured semantic information of code snippets, it further proposes a structure-guided Transformer that encodes the structural information of code through the attention mechanism. Experimental results show that the model has significant advantages over state-of-the-art deep learning models for code summarization in handling low-frequency words and structured semantics.
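The two components described in the abstract can be illustrated with a minimal sketch. The paper does not specify its retrieval metric or the exact form of its structure-guided attention, so the following is an illustrative assumption: similar snippets are retrieved by token-level Jaccard overlap, and attention scores are biased toward token pairs that are adjacent in a (toy) AST adjacency matrix before the softmax.

```python
import re
import numpy as np

def tokenize(code):
    """Crude identifier-level tokenizer for code snippets."""
    return re.findall(r"[A-Za-z_]\w*", code)

def jaccard(a, b):
    """Token-level Jaccard similarity between two code snippets."""
    sa, sb = set(tokenize(a)), set(tokenize(b))
    if not sa and not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def retrieve_most_similar(query, corpus):
    """Return the training snippet most similar to the query snippet."""
    return max(corpus, key=lambda c: jaccard(query, c))

def structure_guided_attention(scores, adjacency, bias=1.0):
    """Bias raw attention scores toward AST-adjacent token pairs,
    then apply a row-wise softmax."""
    biased = scores + bias * adjacency
    e = np.exp(biased - biased.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

In this sketch, `retrieve_most_similar` stands in for the retrieval-based augmentation, and `structure_guided_attention` shows one simple way structural information can reshape the attention distribution: adjacent pairs receive an additive bias, so they soak up more attention mass after the softmax.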
Keywords: code summarization; abstract syntax tree (AST); Transformer; semantic similarity; self-attention mechanism; program comprehension
Funding: National Natural Science Foundation of China (61972197); Natural Science Foundation of Jiangsu Province (BK20201292)
Citation:
沈鑫,周宇.基于神经网络和信息检索的源代码注释生成.计算机系统应用,2023,32(7):1-10
SHEN Xin,ZHOU Yu.Source Code Summarization Based on Neural Network and Information Retrieval.COMPUTER SYSTEMS APPLICATIONS,2023,32(7):1-10