Computer Systems & Applications, 2022, 31(12): 203-210
Few-shot Semantic Segmentation Model Based on Semantic Alignment
(School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 230009, China)
Received: March 20, 2022    Revised: April 14, 2022
Abstract: Object images in the real world often exhibit large intra-class variation, so describing an entire category with a single prototype leads to semantic ambiguity. To address this, a superpixel-based multi-prototype generation module is proposed, in which multiple prototypes represent different semantic regions of an object, and a graph neural network exploits contextual information among the generated prototypes to correct them and keep the sub-prototypes orthogonal. To obtain more accurate prototype representations, a Transformer-based semantic alignment module is designed to mine the semantic information contained in the query-image features and the background features of the support images. In addition, a multi-scale feature fusion structure is proposed to guide the model to focus on features that appear in both the support images and the query images, improving robustness to changes in object scale. The proposed model is evaluated on the PASCAL-5i dataset, where its mean intersection over union (mIoU) is 6% higher than that of the baseline model.
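
To make the prototype-matching idea behind the abstract concrete, below is a minimal illustrative sketch in PyTorch. It is not the paper's implementation: it substitutes plain k-means over foreground support features for the superpixel-based grouping, and scores query locations by cosine similarity against the resulting prototypes; the graph-network prototype correction, the Transformer-based semantic alignment, and the multi-scale fusion are not shown. All function names, tensor shapes, and the prototype count k are assumptions introduced for illustration.

    # Illustrative sketch only (not the paper's code). Assumes PyTorch; shapes and names are hypothetical.
    import torch
    import torch.nn.functional as F


    def masked_average_prototype(feat, mask):
        """Single-prototype baseline: average support features inside the mask.
        feat: (C, H, W) support feature map, mask: (H, W) binary foreground mask."""
        mask = mask.unsqueeze(0)                               # (1, H, W)
        denom = mask.sum().clamp(min=1.0)
        return (feat * mask).sum(dim=(1, 2)) / denom           # (C,)


    def multi_prototypes(feat, mask, k=5, iters=10):
        """K prototypes from foreground support pixels via plain k-means,
        standing in for the superpixel-based grouping described in the abstract."""
        fg = feat.permute(1, 2, 0)[mask.bool()]                # (N_fg, C) foreground features
        k = min(k, fg.size(0))                                 # guard against tiny masks
        centers = fg[torch.randperm(fg.size(0))[:k]].clone()   # (K, C) random init
        for _ in range(iters):
            assign = torch.cdist(fg, centers).argmin(dim=1)    # nearest prototype per pixel
            for j in range(k):
                pts = fg[assign == j]
                if pts.numel() > 0:
                    centers[j] = pts.mean(dim=0)
        return centers                                         # (K, C)


    def match_query(query_feat, prototypes):
        """Cosine similarity between every query location and every prototype;
        the max over prototypes gives a coarse foreground score map."""
        C, H, W = query_feat.shape
        q = F.normalize(query_feat.view(C, -1), dim=0)         # (C, H*W)
        p = F.normalize(prototypes, dim=1)                     # (K, C)
        sim = p @ q                                            # (K, H*W)
        return sim.max(dim=0).values.view(H, W)                # (H, W)


    if __name__ == "__main__":
        feat_s = torch.randn(256, 60, 60)                      # support feature map (assumed size)
        mask_s = (torch.rand(60, 60) > 0.7).float()            # toy support foreground mask
        feat_q = torch.randn(256, 60, 60)                      # query feature map
        single = masked_average_prototype(feat_s, mask_s)      # (256,) single-prototype baseline
        protos = multi_prototypes(feat_s, mask_s, k=5)         # (5, 256) multi-prototype variant
        score = match_query(feat_q, protos)                    # (60, 60) similarity map
        print(single.shape, protos.shape, score.shape)
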
Funding: Joint Funds of the National Natural Science Foundation of China (U20B2044)
Citation:
ZHANG Min, YANG Juan, WANG Rong-Gui. Few-shot Semantic Segmentation Model Based on Semantic Alignment. Computer Systems & Applications, 2022, 31(12): 203-210